r/SneerClub archives

“I hate this whole rationality thing. If you actually take the basic assumptions of rationality seriously (as in Bayesian inference, complexity theory, algorithmic views of minds), you end up with an utterly insane universe full of mind-controlling superintelligences and impossible moral luck, and not a nice “let’s build an AI so we can fuck catgirls all day” universe. The worst that can happen is not the extinction of humanity or something that mundane – instead, you might piss off a whole pantheon of jealous gods and have to deal with them forever, or you might notice that this has already happened and you are already being computationally pwned, or that any bad state you can imagine exists. Modal fucking realism.” -muflax

Link: http://web.archive.org/web/20141013085708/http://kruel.co/backup/Ontological%20Therapy.html

I feel like this might be the point that kept driving me away when trying to read LessWrong. The stupid excessive extrapolation that becomes idiotic at the second step, let alone the AI god step.

That guy is running so damn fast toward his nifty sci-fi cosmology that he managed to mishandle almost every concept there ever was. You can tell he has not been inside a physics classroom.

As far as I know, modal realism by way of computational idealism (?) isn’t orthodox Rationalist doctrine, but EY did write some Greg Egan mega-crossover fanfiction based on that idea (it feels like their most original-seeming ideas are always stolen from like Greg Egan or Douglas Hofstadter or whichever writer). You’d think he’d be less worried about existential risk if he believed it? IDK.

I never got really deep into the FDT/UDT lore even though “bash every philosophical problem with information theory” is probably the only strand of Rationalist thought that I find genuinely interesting.

I also find it amusing that when this was written, modal realism seemed like a deeply weird idea that you need a huge brain to take seriously, but now we have TikTok teens with their reality shifting.

> I never got really deep into the FDT/UDT lore even though “bash every philosophical problem with information theory” is probably the only strand of Rationalist thought that I find genuinely interesting.

TBH they're just really bad at math. Expected utilities that don't converge, utility "approximation" by simply summing whatever "hypotheses" happen to be around with no regard for avoiding a biased selection of terms, etc. Sort of a cargo cult of thinking like a really stupid 1960s idea of an AI.

Then there's the whole thing with them not understanding that e.g. Solomonoff induction works by having each program output a string beginning with the given prefix (the previous observations), as opposed to Bostrom-style bullshit where a so-called hypothesis is merely required to explain your existence by containing you inside of it, and it can contain a large number of copies of you. That obviously doesn't work if the hypotheses are something like Turing machine tapes (or lambda calculus expressions or the like), because the maximum number of anything "inside" such hypotheses grows faster than any computable function of their length. It also fails for another reason: you can always have a really simple hypothesis where the match to you arises randomly, an issue that was obvious long before any of this was formalized (Boltzmann brains).
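To make the prefix condition concrete, here's a toy sketch in Python (the "programs", their bit lengths, and their outputs are all made up for illustration; actual Solomonoff induction enumerates every program of a universal machine and is uncomputable):

```python
# Toy illustration of the prefix condition in Solomonoff-style induction.
# Each hypothetical "program" is just (description_length_in_bits, output_string);
# the real construction runs actual programs on a universal machine.
toy_programs = [
    (3, "0101010101"),   # short program: keeps alternating bits
    (5, "0101110000"),   # longer program sharing the same 4-bit prefix
    (4, "1111111111"),   # output does NOT begin with the observations
    (2, "1100010100"),   # contains "0101" somewhere inside, but not as a prefix
]

observed = "0101"  # previous observations, required to be an output *prefix*

# Weight of the observations: sum 2^-length over programs whose output
# *begins with* them; merely containing the string somewhere doesn't count.
weight = sum(2 ** -length for length, output in toy_programs
             if output.startswith(observed))
print(weight)  # 2^-3 + 2^-5 = 0.15625
```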
What's FDT/UDT, and what Bostrom work involving a "hypothesis" containing "copies of you" are you referring to? Is this related to [Tegmark's mathematical universe hypothesis](https://en.wikipedia.org/wiki/Mathematical_universe_hypothesis), or to [Juergen Schmidhuber's ideas](https://people.idsia.ch//~juergen/computeruniverse.html) about all computable universes existing?
Nick Bostrom (a very rationalist-adjacent philosopher) has a simulation argument and a bunch of variations like the doomsday argument, all reliant on privileging a model with more people in it. This leads to all sorts of crazy results, which he keeps within the realm of “normal” scifi and clickbait by not exploring the idea too far. As for the two folks you're thinking of, I think they're well aware of the difficulties with convergence.

Rationalists are a very specific phenomenon centred around Yudkowsky, a folk philosopher with a severe case of Dunning-Kruger when it comes to math. FDT, UDT, TDT and such you'll have to look up to read more about, but they are poorly formulated decision theories that would require a frigging halting oracle for some variety of “agents finding instances of themselves” within arbitrary programs. The actions are then applied as if the agent were in control of every instance found.

Rationalist stuff is not really dependent on modal realism, although modal realism does make things worse; without it you still have insane expected utility sums that do not converge. Ultimately the big-picture goal of rationalism has always been to teach people whatever ways of thinking would make them give money to Yudkowsky, mostly accomplished via a really bad estimate of the expected utility of doing so. And that expected utility comes out the same whether other worlds merely have a probability or actually exist and are weighted by the same number as the probability.
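To put a number on the non-convergence complaint (made-up figures, not anyone's actual model): if the prior weight of the n-th hypothesis falls off like 2^-n while the payoff it dangles grows like 4^n, the expected-utility partial sums never settle down.

```python
# Toy divergent "expected utility": prior ~ 2^-n, payoff ~ 4^n (illustrative numbers only).
def partial_expected_utility(n_terms):
    return sum((2.0 ** -n) * (4.0 ** n) for n in range(1, n_terms + 1))

for n in (10, 20, 40):
    print(n, partial_expected_utility(n))  # each term equals 2^n, so the sum grows without bound
```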
> Nick Bostrom (a very rationalist-adjacent philosopher) has a simulation argument and a bunch of variations like the doomsday argument, all reliant on privileging a model with more people in it. This leads to all sorts of crazy results, which he keeps within the realm of “normal” scifi and clickbait by not exploring the idea too far.

Yes, I've read a bunch of Bostrom's writings on his way of thinking about anthropic reasoning (the 'self-sampling assumption') and on the simulation argument; I was trying to get at which specific claims of Bostrom's you were referring to when you said "Bostrom-style bullshit where a so-called hypothesis is merely required to explain your existence by containing you inside of it, and it can contain a large number of copies of you". Was that about the simulation argument specifically, or the self-sampling principle generally, or something else?

I think Bostrom's self-sampling assumption is basically just a more systematic presentation of the type of anthropic reasoning that physicists sometimes use in cosmology (especially multiverse scenarios like the string theory landscape, see [here](https://www.wired.com/2014/11/check-universe-exist/) for example), and to some extent in astrobiology as well (some examples [here](http://philsci-archive.pitt.edu/12553/1/Anthropic_MMC_v4.pdf)). Of course there is plenty of debate about the validity of anthropic reasoning in general, but I don't think Bostrom's self-sampling assumption goes out on any limbs that would seem implausible even to those who *do* accept anthropic arguments. The simulation argument makes a bunch of other assumptions beyond the self-sampling assumption, so one could certainly accept the plausibility of the latter but not the former. But as I said, I was wondering whether your criticism was specifically about the simulation argument or something broader.

Also, on your comment that Bostrom's arguments are "all reliant on privileging a model with more people in it": it's worth noting that he specifies self-sampling only applies to real individuals in whatever picture of the universe or multiverse you're assuming, not to possible individuals in different models of the universe/multiverse. See his discussion of the "presumptuous philosopher" on page 9 of [this paper](https://www.anthropic-principle.com/preprints/mys/mysteries.pdf), where he imagines two scientific models that each predict a unique universe (not a multiverse), but one type of universe would be expected to give rise to a trillion times more individuals than the other. He says that in this case it would be invalid to use the self-sampling assumption to predict we're more likely to be living in the "bigger" type of universe. On the other hand, if we had a multiverse model where there were many actually existing universes of each type (and many actually existing observers living in each), with each type occurring about equally often from an objective point of view, then I think his self-sampling assumption would say you're a trillion times more likely to find yourself in the bigger type of universe.

> FDT, UDT, TDT and such you'll have to look up to read more about, but they are poorly formulated decision theories that would require a frigging halting oracle for some variety of “agents finding instances of themselves” within arbitrary programs. The actions are then applied as if the agent were in control of every instance found.
I was mostly just asking what the acronyms stood for, but your mention of decision theory clued me in on what the DTs stand for, and after some quick searching I presume that UDT is something called [updateless decision theory](https://www.lesswrong.com/tag/updateless-decision-theory) invented by the transhumanist (and LessWrong poster) [Wei Dai](http://www.weidai.com), and that TDT and FDT are Yudkowsky's own [timeless decision theory](http://intelligence.org/files/TDT.pdf) and [functional decision theory](https://www.reddit.com/r/SneerClub/comments/grouea/what_are_the_problems_with_functional_decision/). I don't know what Wei Dai's arguments for the advantage of UDT over existing decision theories would be, but in Yudkowsky's case I gather one of the main motivations is to get better answers in thought experiments like [Newcomb's paradox](https://en.wikipedia.org/wiki/Newcomb%27s_paradox), imagining that the entity controlling the setup has already run a bunch of detailed simulations of your mind. I once wrote up a [long effortpost](https://www.reddit.com/r/SneerClub/comments/grouea/what_are_the_problems_with_functional_decision/fsw8e84/) arguing that even if causal decision theory is a bad guide to action in these kinds of thought experiments, Yudkowsky's decision theories seem to have no practical advantages over the existing [evidential decision theory](https://en.wikipedia.org/wiki/Evidential_decision_theory) that's been around since the 1980s.

> Ultimately the big-picture goal of rationalism has always been to teach people whatever ways of thinking would make them give money to Yudkowsky, mostly accomplished via a really bad estimate of the expected utility of doing so.

Yeah, the rationalist community as a whole (not necessarily counting more 'rationalist adjacent' people like Bostrom) seems to dogmatically adopt a lot of very debatable ideas, like a particular flavor of utilitarianism that seems to lead to the [repugnant conclusion](https://plato.stanford.edu/entries/repugnant-conclusion/), or things like [Yudkowsky's argument](https://www.lesswrong.com/posts/3wYTFWY3LKQCnAptN/torture-vs-dust-specks) that it would be a good tradeoff to torture someone for 50 years if it would save a large enough number of people from brief moments of irritation at a speck of dust in their eye. I also noticed that when the moral dilemma arose of Yudkowsky's buddy Scott Siskind having some old emails leaked revealing his support for HBD (along with his feeling that there was a lot of value in neo-reactionary thought), Yudkowsky completely dropped the idea of a utilitarian evaluation of the pros and cons of releasing information that might be relevant to the public (or of the dangers of right-wing radicalization in internet communities), and fell back on an extremely deontological perspective where anyone who doesn't have an absolute moral stricture against releasing private correspondence must be ["evil"](https://www.facebook.com/yudkowsky/posts/10159408250519228).
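For anyone who hasn't run into Newcomb's problem before, here is roughly why the predictor's track record drives the one-boxing intuition (the 0.99 accuracy is an assumed illustrative figure; the box values are the ones usually quoted):

```python
# Rough Newcomb's-problem payoff comparison (illustrative numbers).
accuracy = 0.99          # assumed chance the predictor correctly guessed your choice
opaque_box = 1_000_000   # filled only if you were predicted to take the opaque box alone
clear_box = 1_000        # always contains this much

# Evidential-style expected values: treat the opaque box's contents as correlated with your choice.
ev_one_box = accuracy * opaque_box
ev_two_box = (1 - accuracy) * opaque_box + clear_box

print(ev_one_box, ev_two_box)  # roughly 990000 vs 11000, so one-boxing comes out ahead here
```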
Well, with Bostrom's simulation argument, for example, we have the following two classes of models:

a: The world around me is the actual physical universe.

b: The world around me is a simulation within a computer (there's another level of "around me").

The latter gets privileged by the number of people in it. These are two very different models. They don't even share the same laws of physics, if by the laws of physics I mean the laws governing the motion of my specific shoe dropped on the floor, and if we assume that the simulations are in some way non-exact, or are exact but allow meddling (which would constitute an extra law of physics).

I think the universal prior and Solomonoff induction are very illuminating with regard to this argument. From that perspective, if indeed in the future there will be 1024 simulations of you, those would be 1024 different models, each of them presumably something like 10 bits longer for having to pick one simulation out of 1024 as the one that is you (as the one whose observations will appear on the output tape), and thus each correspondingly 1/1024 as probable. So the number of simulations of you has no impact at all.

This seems weird when imagining it from a hypothetical god's-eye view over the whole history, with god placing souls somewhere and somewhen, of course. Halfway atheism leads to all sorts of weird outcomes: there's no god, but there's still a god's-eye view and souls, and the souls aren't placed by god, so they must be placed by some mathematical law.

If we examine the discrepancy, we see that the issue arises any time we merely require a model of the world to contain you somewhere inside of it, without locating you. A hypothesis with no gravity and just gas would do just fine; somewhere inside of it there would be a Boltzmann brain with all the right memories, believing that he's you, right before dissipating. Of course, if you require actually locating that brain, that makes it an extremely complex hypothesis, for having to encode a very large offset. I'm personally convinced that there's something incoherent about how we accept a theory as an *explanation*, which is absent when we require a predictive model.

Edit: and of course, rationalists are the first to namedrop Kolmogorov and Solomonoff, and yet they never adopt the idea that theories should match observations and be predictive.

re: evidential decision theory, completely agreed. The counterargument, the smoking lesion, is particularly funny in light of what we know today, which is that you shouldn't take on faith the causal models that other agents (e.g. tobacco companies) tell you to use.

Edit: and the tobacco industry's explanation of cancer was not even logically consistent with there being an agent following a decision theory of some kind; instead the lesion is causing the decision.
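A quick numerical check of that 1024-simulations cancellation, with a made-up base description length, just to show that the copy count and the indexing cost exactly offset under a 2^-length prior:

```python
# The 1024-simulations cancellation under a 2^-length prior (base_bits is made up).
base_bits = 100                                # hypothetical description length of the base model
n_copies = 1024
index_bits = 10                                # log2(1024) extra bits to point at one copy

p_no_simulation = 2 ** -base_bits              # the plain "this is the physical universe" model
p_one_indexed_copy = 2 ** -(base_bits + index_bits)
p_all_indexed_copies = n_copies * p_one_indexed_copy

print(p_all_indexed_copies == p_no_simulation)  # True: the copy count cancels the indexing cost
```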
> If we examine the discrepancy, we see that the issue arises any time we merely require a model of the world to contain you somewhere inside of it, without locating you. A hypothesis with no gravity and just gas would do just fine; somewhere inside of it there would be a Boltzmann brain with all the right memories, believing that he's you, right before dissipating. Of course, if you require actually locating that brain, that makes it an extremely complex hypothesis, for having to encode a very large offset.

Doesn't this have a weird side effect of privileging models with "easy-to-locate" minds (however that would work) in them? And by extension, models that include anything else involving "locating minds" (e.g. telepathy), because you'd have to do that anyway? Still, it makes more sense than the idea of looking for copies of yourself inside infinite hypothetical universe-generating algorithms... (Doesn't current physics contain continuous quantities, and isn't it therefore incompatible with a digital-physics metaphysics anyway?)
Yeah, could be; digital physics is certainly way too weird. On the other hand, it never promises to be anything other than a model of the world. Declaring that a mathematical description literally is the world isn't exactly a justifiable position as it is, and far less so when you're considering an incredibly awkward way of constructing those descriptions, one which requires immense complexity to represent even historically early theories of physics and which is extremely biased towards models lacking rotational invariance.

Maybe it would be less of an issue with continuous physics with all the exact symmetries and invariances, where the world can be centered on the observations being predicted (with some variation of Mach's principle, you can center the model wherever you want). Then it is unclear what many instances do to a probability, but it seems to me that many instances within a simulation still shouldn't make it a more probable model. If in the future there are a billion perfectly identical copies of you, which copy are you in? Well, you're in all of them, objectively speaking; by definition they're the same. And subjectively speaking, you are where you are, the question is what is around you, and the probabilities are a matter of choosing some sensible prior, whereby extravagantly over-complicated hypotheses are heavily discounted.

There's also another way to look at it, which seems to me to be the most promising (or, at least, not promising of nonsense). What are we doing when we try to anticipate the world and predict the results of our actions? We are constructing something within a little part of the world to predict the rest, be it a thought, a manual calculation on a piece of paper, a scaled-down airplane wing in a wind tunnel, a digital simulation on a computer, or a quantum program. We never have to, nor are we able to, enumerate all those endless possible programs anyway. The reason we can model the universe at all is that we have a small piece of it that we can use; if there are other possible universes, I'd assume almost all of them we can't model at all, because we are not inside of them and don't get to use their weirdness for their equivalent of "computing".

Edit: as applied to "rational" agents, obviously anyone or anything that's as smart as humans should be able to e.g. use a wind tunnel to build an airplane. Which is rather curious, because here you actually are not able to make a prediction of the airflow on the airplane, or in the wind tunnel, but you are able to make a prediction that those two will be close. That is not really captured by the notion of an agent simply simulating the world.
the "copies" thing was all inspired by https://en.wikipedia.org/wiki/Permutation_City
The idea of some sort of branching of identity when multiple copies of you are being created goes back further than that, I'd say it's present in the [Everett interpretation of quantum physics](https://plato.stanford.edu/entries/qm-everett/) for example (Everett himself [apparently believed](https://space.mit.edu/home/tegmark/everett/everett.html#e23) in a version of the [subjective quantum immortality](https://en.wikipedia.org/wiki/Quantum_suicide_and_immortality) idea which is relevant to the plot of *Permutation City*).

Listening to these people, I can almost understand what caused the neo-reactionaries to show up in these spaces. You brainblast yourself on AI threat all day, and so it’s no wonder they want to burn all the processor boards and return to monke.

Well, except for the ones who want to make the Basilisk and have it send all feminists to hell for them.

> let’s build an AI so we can fuck catgirls all day

Big joke, haha, but it says something in itself. The nearest thing to a rationalist moral consideration is self-concerned, i.e. what if the AGI tortures “me”? Wrong locus. What if consciousness for the AGI approximates torture, as it seems to generally?

Or to ground this a bit – existence as given – what if the catgirls don’t want to fuck you?

should you flair this NSFW?

Isn’t this that Orion’s Arm shit with 16 posthuman gods?

The AI Gods in Orion's Arm are just early 2000s meme ideologies mapped to the Kabbalah for some reason. This is more like every possible god in the multiverse is simultaneously threatening you with Roko's Basilisk / Pascal's Wager logic and you're having a mental breakdown over it.

I don’t normally like to opine on other people’s mental health, no matter how eccentric their behavior. But man, if OP is going to flat-out tell me they were having a psychotic break, all I can say is that I hope they’re in a better place now.

P.S. - remember that you’re not alone! The next time you dwell on the agony of being embedded in the substanceless procession of natural law, go back to the granddaddy of ontological suffering prevention and try Buddhism on for size.

If you are talking about Kruel, I think he was some variety of alt-right and very concerned about Syrian refugees, last I checked. It’s been a while, so he could well be any ideology by now.