r/SneerClub archives
Quality sneer in comments: "I’ve read a few things by Bostrom, which just strike me as the philosophical equivalent of math proofs that 1 = 0." (https://crookedtimber.org/2022/11/13/a-rant-on-ftx-william-macaskill-and-utilitarianism/)

Like a perpetual motion device that outputs energy even though every component loses energy, he comes up with arguments that tell you something about the future without taking any information in.

edit: maybe it's more like a proof that x = 1, but with no facts about x coming in at all.

They also apply it too selectively, e.g. they nearly convince themselves they're in a simulation one day, and on another day they don't discount their Pascal's-wager nonsense about 10^50 future people at all, even though you'd think positions of such influence over the future should be rather rare.

For what it's worth, I think the question of "where and when am I in an imaginary world" is simply the wrong question; you are where you are, and there are different possibilities for what is around you or what happened in the past. Maybe there are ancient aliens in the past; maybe there are future versions of ourselves in the past who made simulations of us for no discernible purpose. The latter is frankly even sillier than ancient aliens.

Treating it as locating yourself in an imaginary universe requires taking that imaginary universe on faith first, which seems to be what rationalists do entirely too much.

edit: I think in literally all of his writings that I bothered to look at, the evidence in support of (an incredibly specific) hypothesis comes from within the hypothesis itself. For all the talk of maps and territories, rationalists seem extremely bad at even distinguishing between the task of mapping and the task of locating something on an existing map.

> Treating it as locating yourself in an imaginary universe requires taking that imaginary universe on faith first, which seems to be what rationalists do entirely too much.

Yeah, I'm not sure if that's the right comparison, but it seems to me to basically be the ontological argument for God (perfection includes existence) rebranded for nerds. At some point you're just playing word games.
> the evidence in support of (an incredibly specific) hypothesis comes from within the hypothesis itself

Yeah, that is the problem with a lot of the simulation theory stuff, especially if you get it second hand from somebody like Musk, who upped the probability that we live in a sim by a few factors just because he changed the "number of people who will be running ancestor simulations in the future" value.

(As an aside, nobody, as far as I know, is running any ancestor simulations. The Sims, Dwarf Fortress, RimWorld, and even something like [Ultima Ratio Regum](https://www.markrjohnsongames.com/games/ultima-ratio-regum/) are not ancestor simulations, and real ancestor simulations are not really possible on Turing machines due to complexity theory, which brings me to my next point.)

These simulations are also almost impossible to run on our current computing paradigm, especially if you don't want it to be obvious from the inside that it is a simulation: complexity theory will eat all the processing power. Unless you assume the simulators specifically created Turing machines to bound our computing while their own computing isn't so bound, and then you are back in "god/satan put the fossils in the ground to trick us" territory (an assumption which also wrecks the "we will run ancestor sims in the future" argument).
> especially if you don't want it to be obvious from the inside that it is a simulation

Eh. My favorite take on simulation theory (not that I necessarily believe in it) is that it is actually really obvious we live in one, but whoever made the sim hardcoded it so that we could never realize it. So much (simulated) manpower and effort spent trying to rationalize physics that can ultimately just be explained by limited server capacity or something, and we'd never know. I think that'd be hilarious!
Yeah, the problem here is that "just hardcode it so they never realize it" isn't that easy, so we are back to the devs having literal godlike powers; it is just religion with extra steps. Our IRL sims break all the time. (My favorite story is the genetic algorithms that learned to fly by glitching into the floor.)
The whole universe used to be populated in the initial design docs, but right now they can only just keep one planet running, next to the few billion calls to cancel_sim_awareness_and_restart_train_of_thought() every second. Oops!
> We are all aware that the senses can be deceived, the eyes fooled. But how can we be sure our senses are not being deceived at any particular time, or even all the time? Might I just be a brain in a tank somewhere, tricked all my life into believing in the events of this world by some insane computer? And does my life gain or lose meaning based on my reaction to such solipsism?
>
> Project PYRRHO, Specimen 46, Vat 7. Activity recorded M.Y. 2302.22467. (TERMINATION OF SPECIMEN ADVISED)
Yeah. What I mean is that even if we believed that simulations are likely, he still has a serious logical flaw there. Specifically, the number of ancestor simulations is a number inside the hypothesis itself, and then somehow it comes *outside of that hypothesis* and starts discounting a *different hypothesis*, one where nobody is running any simulations yet but maybe will in the future. Which seems rather stupid, because of course you can have very large numbers inside a "hypothesis" very easily, at a very minor complexity cost (see also Busy Beaver Turing machines). In fact you can even have an infinity inside a hypothesis, because we are not entirely certain we are limited to the finite.

edit: Also, if there's a hypothesis that there's a trillion simulations of me in the future, that really ought to be a trillion distinct hypotheses, each needing log2(trillion) bits of extra information to pick a specific instance of me, and each correspondingly discounted by 1/trillion. Even if we take it as a given that the future is very likely to build a very large number of simulations, it still doesn't follow that it is likely that our ancestors built simulations in our past and we are in one; that is, if you believe that hypotheses requiring more information to describe are correspondingly less likely (which you need to make probabilities add up to 1, but when has a rationalist ever been concerned with that?).

edit: E.g. suppose there's a trillion future copies of me with numbers tattooed on their foreheads, from 1 to 1 trillion. So I have a trillion hypotheses about what number I'll see when I look in the mirror. I don't know about Bostrom, but if I am guessing a number from 1 to a trillion, and it's uniformly distributed, the chance of guessing right is one in a trillion (note that the non-simulation hypothesis doesn't involve simulations at all, or guessing which simulation you are). The number of future simulations simply cancels out if they've got numbers on their foreheads, so why in the bleep would it not cancel out if they don't? He wants to treat "it is really the year 2022" as a wild guess within "it is year 3022 and there's a gazillion simulations, plus there was one real you in the past", which is a different hypothesis. The fact that the 3022 hypothesis also makes the additional guess that this 3022 has a history including a 2022 exactly like ours should only make the 3022 hypothesis *less* probable.
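For concreteness, here's a toy version of that cancellation (a minimal sketch: the 2^-bits prior, BASE_BITS, and every number in it are my illustrative choices, not anything from Bostrom's paper):

```python
# Minimal sketch of the "log2(trillion) extra bits cancels the trillion
# copies" point. BASE_BITS and the 2^-bits prior are assumptions.
from math import log2

def prior_from_bits(bits: float) -> float:
    """Complexity-style prior: roughly 2^-N for an N-bit description."""
    return 2.0 ** -bits

N_COPIES = 10**12    # "a trillion future copies" from the comment above
BASE_BITS = 100      # made-up description length of either base hypothesis

# Hypothesis A: "it is really 2022, no simulations" -- just the base bits.
p_no_sim = prior_from_bits(BASE_BITS)

# Hypothesis B_k: "there are N copies and I am copy #k" -- each instance
# needs log2(N) extra bits to single out which copy it is.
p_one_copy = prior_from_bits(BASE_BITS + log2(N_COPIES))

# Summing over all N instances, the 1/N discount eats the N copies:
p_all_copies = N_COPIES * p_one_copy

print(p_all_copies / p_no_sim)   # ~1.0: the huge N buys no advantage
```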
> Even if we take it as a given that the future is very likely to build a very large number of simulations, it still doesn't follow that it is likely that our ancestors built simulations in our past and we are in one; that is, if you believe that hypotheses requiring more information to describe are correspondingly less likely

I think Bostrom always implicitly assumes the [B-theory of time](https://www.rep.routledge.com/articles/thematic/time-metaphysics-of/v-2/sections/the-a-theory-and-the-b-theory) when he does his anthropic calculations: the idea that there is no objective, observer-independent truth about which moment is the "present" ("now" is treated as an observer-relative term like "here"), hence no objective sense in which the future is any less real than the past or present. Imagine a godlike perspective where the whole of 4D spacetime is seen all at once, with all the beings who ever exist within it treated as equally real; he's saying you're equally likely to be any of the beings in it whose type of experience of the world is compatible with your own. Of course it's an open question whether that godlike perspective would find that simulated people outnumber biological people across spacetime, but if you just take that as a given, as you suggested, then under the B-theory you don't need an additional hypothesis about whether they exist in "the future" or "the past".

For a non-B-theorist: suppose you just took it as a given that there was an alien civilization *right now* performing trillions of conscious simulations of humans on Earth, alongside 8 billion biological humans on Earth. If you don't know for sure which one you are, it seems like, if you accept any kind of anthropic calculation, you should consider it more likely that you're one of the simulations; you don't need any additional hypothesis that the simulations are happening "here" rather than "far away".

> edit: E.g. suppose there's a trillion future copies of me with numbers tattooed on their foreheads, from 1 to 1 trillion. So I have a trillion hypotheses about what number I'll see when I look in the mirror. I don't know about Bostrom, but if I am guessing a number from 1 to a trillion, and it's uniformly distributed, the chance of guessing right is one in a trillion (note that the non-simulation hypothesis doesn't involve simulations at all, or guessing which simulation you are). The number of future simulations simply cancels out if they've got numbers on their foreheads, so why in the bleep would it not cancel out if they don't?

What do you mean, it cancels out? If you don't yet know your own number, and all the trillion copies are treated as equally real (either you adopt the B-theory of time, or you imagine an alternate scenario where all the trillion copies exist *right now*), then, for example, there would be 9 times as many copies with a number over 100 billion as copies with a number of 100 billion or less. So if you had to bet on the outcome of looking in the mirror and seeing your number, wouldn't it make sense to say the odds are 9:1 that your own number will be over 100 billion?
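(Spelling out the arithmetic behind that 9:1 figure; this is just my gloss, making the uniform-distribution assumption over the trillion copies explicit:

$$P(n > 10^{11}) = \frac{10^{12} - 10^{11}}{10^{12}} = 0.9, \qquad \text{odds} = 0.9 : 0.1 = 9 : 1.)$$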
> For a non-B-theorist: suppose you just took it as a given that there was an alien civilization *right now* performing trillions of conscious simulations of humans on Earth, alongside 8 billion biological humans on Earth. If you don't know for sure which one you are, it seems like, if you accept any kind of anthropic calculation, you should consider it more likely that you're one of the simulations; you don't need any additional hypothesis that the simulations are happening "here" rather than "far away".

Nope. Even if I take that as a given, I still have trillions + 8 billion hypotheses about the environment around me, and each such hypothesis is to be assigned an a-priori probability, where the commonly accepted way to assign probabilities makes them on the order of 2^-N, with N the number of bits required to describe that specific hypothesis. Nothing is treated as more or less "real"; the hypotheses are treated as more or less *probable* depending on their complexity. I think where things go wrong here is this informal confusion of a scientific hypothesis (predicts future observations) with a theological worldview (can just postulate a God's-eye view and leave it at that).

I think what's happening is that you have a hypothesis space with a complexity prior, and then you take a rather arbitrary subset of that hypothesis space (the one with the aliens) and assign equal probabilities to the hypotheses within that subset. The complexity prior on the original space wouldn't give those hypotheses equal probabilities, but once you take them out of context it is no longer clear that you need a complexity prior at all.

Yeah, sometimes it is appropriate for a subset to get equal probabilities, e.g. for a fair die, but have you ever tried to actually make a fair die out of something, by hand? Suppose you tossed a die around inside a box, enough to butterfly-effect minute quantum fluctuations up to macroscale, and then opened the box. Suppose you take it as granted that there are going to be 6 copies of you, one seeing each side of the die; they're all real and all coexist, I dunno, you're all inside an alien simulator or MWI is true or whatever. Should they get equal probabilities? Not without the hard work of cutting a fair die; not in a context where you understand that the die is less likely to land on its narrow side.
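A quick sketch of that die point (a toy simulation; the face weights are invented to stand in for a hand-cut, elongated die):

```python
# Toy version of the unfair-die point: six coexisting "copies of you"
# each see one face, but equal realness doesn't force equal probability.
# The face weights below are made up for an elongated, hand-cut die.
import random

faces = [1, 2, 3, 4, 5, 6]
# "1" and "6" are the small square end faces of the long die: rare.
weights = [0.04, 0.23, 0.23, 0.23, 0.23, 0.04]

rolls = random.choices(faces, weights=weights, k=100_000)
for f in faces:
    print(f, rolls.count(f) / len(rolls))
# All six outcomes "exist" on every toss in the MWI/simulator story,
# yet the shape of the die, not the count of coexisting outcomes,
# sets the probabilities you should bet with.
```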
> Nope. Even if I take that as a given, I still have trillions + 8 billion hypotheses about the environment around me

Isn't it usually true that statements about the probability of a given outcome can be reconceived as being about *classes* of more fine-grained distinct physical outcomes? For example, if I want to talk about the probability a coin will land "heads", I'm lumping together many distinct possible physical states with the coin landing heads-up on different positions of the floor, not to mention a much huger number of detailed microphysical states if we consider the precise state of all the particles making up the coin and its environment.

> each such hypothesis is to be assigned an a-priori probability, where the commonly accepted way to assign probabilities makes them on the order of 2^-N, with N the number of bits required to describe that specific hypothesis

Are you specifically talking about [Solomonoff induction](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference)? That's one way of dealing with probability that's popular with the LessWrong crowd, but I wouldn't say it's "*the* commonly accepted way to assign probabilities", and I doubt that Bostrom is thinking in these terms. It seems to me that Bostrom tends to take a lot of his cues from the natural sciences, and in the natural sciences, when people talk about probabilities in a theoretical context (like quantum theory, or classical statistical mechanics, or the biological definition of "fitness" in terms of the probability a given genome will survive and reproduce in some environment), I think it's usually implicit that they are talking about the long-term frequencies if a given type of experiment could be repeated an unlimited number of times (what philosophers call "hypothetical frequentism").

For example, when Stephen Jay Gould and Simon Conway Morris had their theoretical disagreement about the degree of contingency vs. convergence in biological evolution, Gould stated it in terms of the thought experiment of rewinding history to some very early era and then "replaying life's tape" multiple times, perhaps with some small perturbation in each run. This is a thought experiment that's obviously not possible in practice, but it's a good way of giving objective meaning to questions about the "probability" of different outcomes in evolution, say, large-brained animals that can use tools and language to develop technology similar to ours. (On the subject of the likelihood of intelligent beings evolving, Gould's opinion was that "any replay, altered by an apparently insignificant jot or tittle at the outset, would have yielded an equally sensible and resolvable outcome of entirely different form, but most displeasing to our vanity *in the absence of self-conscious life.*") Discussions of the Drake equation and the Fermi paradox also usually seem to assume a frequentist definition when they talk about the probability of other star systems getting through various milestones on the way to technological civilization.
In Bostrom's [original paper](https://www.simulation-argument.com/simulation.pdf) he phrased his argument in terms of a trilemma, where the first two options were statements about how "likely" certain types of long-term outcomes for technological civilizations might be:

> This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation.

So here I interpret "very likely" and "extremely unlikely" to be similarly implicit frequentist statements about what we might see if we could observe a very large sample of civilizations at a technological level similar to ours today, and what their "typical" future evolution is like, or about the thought experiment of repeatedly replaying human civilization's own future evolution starting from the present day (perhaps with a minor perturbation in each run, as in Gould's "altered by an apparently insignificant jot or tittle at the outset"). Since these are frequentist notions of what it means for an outcome to be "likely" or "unlikely", there's no need to be concerned with the algorithmic complexity of different possible outcomes, which is specific to certain more subjectivist definitions of probability.
> Isn't it usually true that statements about the probability of a given outcome can be reconceived as being about *classes* of more fine-grained distinct physical outcomes? For example, if I want to talk about the probability a coin will land "heads", I'm lumping together many distinct possible physical states with the coin landing heads-up on different positions of the floor, not to mention a much huger number of detailed microphysical states if we consider the precise state of all the particles making up the coin and its environment.

So what is landing here, exactly? An immortal soul cast from heaven to inhabit a (simulated or not) body? Surely not. Us being in a simulation isn't going to be a distinct physical outcome unless having a soul is a physically distinguishable property.

> Are you specifically talking about Solomonoff induction?

Nothing that specific. Any notion that a hypothesis is something that makes predictions, plus any kind of Occam's razor, would suffice here. If the simulations are forever indistinguishable from reality, we're done, because they don't make any distinct predictions. If they are distinguishable, it's a question of the prior probability of whether you'll see that distinction or not. If you start running experiments frequentist-style, you'll have beings observing that they are in a simulation and beings observing that they are not in a simulation; you won't have an either/or.

Let me address the trilemma first. Suppose it is 50% likely that humanity will construct a very large number of simulations of the past. That seems consistent with the trilemma: we are not "very likely" to go extinct, it's a coin toss. Now onto (3). How does it become any more than 50% likely that we are in a simulation? If you run the experiment 100 times, about 50 times there won't be any simulations at all. How do we get to "almost certainly living in a computer simulation"? It is as I said in the earlier post: a large number leaks out of a hypothesis and goes on a rampage against other hypotheses.

There's another issue. What is the distinction between "we are in a simulation" and "we are in the real world" from God's point of view? Unless you also postulate physical souls, there isn't any; from that omniscient view, we are in both. I don't see any way to get to "almost certainly living in a computer simulation" without making a very large number of additional unexamined assumptions: immortal souls, immortal-soul placement being equiprobable, etc. etc. So, basically, for it to be frequentist probabilities you need immortal souls. For it to be subjective probabilities, well, it's about how you assign priors, and there's absolutely no reason for the priors to be equal across fairly different hypotheses.

edit: To summarize, I think he is just conflating objective and subjective perspectives on probability, switching between them as fits the argument. Let's suppose there's 1 of you here and 99 copies of you simulated by aliens, and the copies will be shown a red "SIMULATION" text 10 minutes from now. Okay, we run this experiment 1000 times. "We" get the same outcome each time: 99 simulated yous seeing the text, and one real you not seeing the text. (Unless you believe that each time a soul gets cast from heaven and lands somewhere at random, and that simulations are equally deserving of a soul, because you are an enlightened, very very Christian atheist.) But wait, you can say: you, the actual you, this specific you, is either going to see the red "SIMULATION" or not. That is a subjective question.
That is a question about what prior probabilities you should assign to your hypotheses, and you cannot repeat that experiment 1000 times, unless we also throw reincarnation into the mix. It may seem natural to assign equal probabilities to those hypotheses when looking at them in isolation, rather than as part of some much larger approach to assigning probabilities to hypotheses (which may very well assign much higher probabilities to models of the world that are not contained inside another world, as part of any kind of Occam's razor whatsoever).
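A back-of-the-envelope Monte Carlo of that 50% point (a sketch under invented numbers, not anything from Bostrom's paper): tallying "how often do simulations exist across repeated runs" separately from "pick a random observer across all runs" is exactly where you can watch the large number leak out of the hypothesis.

```python
# Two tallies for the same coin-toss future; all numbers are made up.
import random

RUNS = 100_000
N_SIMS = 1_000     # simulations per simulation-building run (invented)

runs_with_sims = 0
real_observers = 0
sim_observers = 0

for _ in range(RUNS):
    builds = random.random() < 0.5   # 50% chance this run builds sims
    real_observers += 1              # one "real" history per run
    if builds:
        runs_with_sims += 1
        sim_observers += N_SIMS

# Tally 1: across repeated experiments, how often do simulations exist?
print("runs with any sims:", runs_with_sims / RUNS)   # ~0.5

# Tally 2: soul-placement accounting -- pick a random observer across
# all runs and ask whether it is simulated. The 1000 swamps everything.
print("observer is simulated:",
      sim_observers / (sim_observers + real_observers))   # ~0.998
```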

The comment in the blog post about "esoteric morality" reflects what I've been thinking. An esoteric-exoteric gap is a terrible idea when you are trying to popularise your ethical system.

What if it doesn't need to be popular to wield influence? They just need to capture the bigwigs, e.g. SBF. Isn't that the deal with Yarvin/IDW and Thiel and other cryptofascist techbro types?
In other words, it's just kinda wack that talking about having an esoteric gap is basically part of the longtermist public image. It's like trying to create a noble lie by announcing you are creating a noble lie.
Yeah, I kinda mixed up "I don't think it's a good idea for an ethical system" with pragmatic value. Pragmatically, fostering individuals might be more successful; that's basically what an esoteric-exoteric gap is designed for. But it's notable that they try both anyway, and the result, to me, is that the gap is far more in-your-face than with, say, the Straussians. Thiel and Yarvin in particular seem to have become more invested in popularity, i.e. Yarvin going on Fox News and dropping the Moldbug name, Thiel embedding the phrase "Dimes Square" into the hip rich-kid art world. The IDW I've never seen as a bigwig phenomenon.
"IDW ive never seen as a big-wig phenomenon." Now that you mention it, I agree that IDW has no shortage of foot-soldiers. Thanks. Sorry I'm on mobile and can't do the fancy comment quotes.

Yeah, this is what I got more and more of the deeper I looked.