r/SneerClub archives

What are your thoughts on Bostrom as a philosopher? He wrote a book on AI that references Yud, and has had a lot to do with the whole AI alarmism thing becoming mainstream, but unlike Yud and the rest of the LW crew, the guy actually has some formal education, which helps give prestige to his ideas. What do you think of him?

I thought this was hellishly totalitarian.

Otherwise his Superintelligence wasn’t like insane or anything, but seemed to extrapolate AI progress unreasonably. On the whole, it seemed he’s got the credentials to do some actual rigorous academic work and he’s just not doing that. And he stops appearing innocuous when he becomes a totalitarian, as in the link above, so there’s that.

The one threat in his paper that seems very real to me is the worst of LW with WMDs or worse, followed by a list of extreme solutions with terrible drawbacks. For me as a techlord and true believer, not in big Yud but in unleashing the emergent AI in my basement as soon as I figure out how to write it, it seems like he's ignoring some obvious implications of his scenario. There aren't actually that many people who want to end the world, or it would have already ended: the technology has been available for some time, and we've already made it through a decade of ubiquitous access to technology that would previously have been exclusive to state actors. And with runaway technological progress also comes the potential to solve, or at least lessen the impact of, the issues that lead humans to go so far as to direct their energies towards destruction in the first place. I'd dismiss it as the technophobe babble of a luddite with the power of my above-midwit IQ; it's like he hasn't read enough SF, or only the wrong kind.
As someone who has gotten very interested in Nick Bostrom's views on things, your comment makes me a bit angry because it is terribly misinformed. >I'd dismiss it as the technophobe babble of a luddite with the power of my above-midwit IQ; it's like he hasn't read enough SF, or only the wrong kind. This is hilariously false. Nick Bostrom is a transhumanist who founded the World Transhumanist Association (now Humanity+) and the Future of Humanity Institute. He has written countless papers about the importance of transhumanism, its values, how good a posthuman future could be, etc. Describing Bostrom as a technophobe is like saying that the pope is an atheist (I mean, for real, WTF).

[deleted]

and then in turn you get people whose entire exposure to the simulation hypothesis comes from these people, and who react to them also without fully understanding what the argument is saying, hence the proliferation of "dae simulation hypothesis like creationism?" posts as though it were some sort of devastating turn of the argument on its head and not something bostrom literally discusses in the original paper
Meanwhile the media keeps breathlessly reporting that this is the guy Elon Musk got his "understanding" of simulation theory from, as if Elon Musk's endorsement means fuck-all
Or that Elon Musk actually understands the argument lol

Superintelligence opens with a GDP graph that starts at the dawn of humanity.

Seeing it, I thought of the book more as “fun” philosophy than as resolutely rigorous. I think future-of-humanity institutes etc. have a place, but they should never be the core focus of AI / cyberethics.

Pop philosophers uncritically embracing economics tentpoles that stray way outside their lane (ie any half-attentive anthropologist or historian can poke a million holes in the assumptions that economists take for granted) is such an embarrassing trend. At the end of the day I feel like the job of these pop philosophers is to make the average reader feel smart without confronting them with anything genuinely challenging or uncomfortable.

I tried to listen to his Superintelligence book. It was dangerously boring and largely free of meaningful content. A stoned 16 year old could do better, given access to a good editor and a thesaurus.

not surprised. i remember when he arrived on the transhumanist scene; he didn't bring any new ideas, really. he is known for that Simulation hypothesis thing, but that's an idea that was talked about on those email lists, which he reformulated to make it academically presentable. not a bad thing or anything, but people may be tempted to attribute too much credit when it's not 100% deserved... but in that regard kurzweil was a worse offender with his "the singularity is near" book.

Superintelligence is hot garbage that falls apart when you give it a mean look. The conception of intelligence as a universal and scaling “force” completely detached from any biological or otherwise complex substrate is stupid enough to bring it all down right at the start. The subsequent stuff, like scanned brains being simulated in a server, just makes the arrogant silliness more apparent.

Substrate-dependence and the functionalism debate aren’t settled matters in the areas of philosophy of mind, cog sci, and neuroscience where they are actually worth taking seriously (so… not this super-intelligent AI crap), so let’s not throw that debate out with the bathwater by claiming that everybody has fallen down hard on the side of substrate-dependence. Besides, both Bostrom and Yudkowsky, for example, at least believe in a kind of soft substrate-dependence - especially Yudkowsky - in thinking that complex “superintelligences” may well have very complex internal or mental states which, because they are mapped onto very different “wiring” than the human mind is to the human brain/body, are conceptually inaccessible to humans. Their concept of “superintelligence” is deliberately expansive on this and similar bases. All that a “superintelligence” has to do to be interesting to them is exhibit complex goal-driven behaviour; it does not have to do the things humans do, or do them the way humans do.
Dunno, I haven't seen Yud or Bostrom refer to superintelligence in these terms, but always a variation of (quoted from MIRI): "An intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills." They make this collection of all human capabilities the yardstick and then, in the same breath, claim that all of these can be achieved from a completely non-human source, at the same time, and with no weakness or blind spot that humans could exploit. So a superintelligence is functionally a supercharged turbo-brained god-human, but also definitely not human, because it could be a paperclip-making AI gone rogue, after all. And this is without any explanation; it's just assumed that "yeah, it might work like this, bro." This is not completely unkind to their position, since the scanned-and-simulated-brain argument pretty much spells the same thing out explicitly. *If only this wet pie in our skulls could get some more hertzes and bits, we might be gods*. This "scaling up" of skills and capabilities that are only ever encountered in complex biological systems, and assigning them to non-biological systems (complex or otherwise), is what I see as baseless. There are obviously more reasons why I think so, but ain't nobody got time to write or read a thesis on this here. And while Yud might sometimes use the word "complexity" when describing the structure of something, I don't think his understanding of the matter has evolved beyond a basic mathematical sense of the concept. It certainly doesn't seem to encompass complex systems and the related sciences. Which is (one of the many reasons) why he fails to understand intelligence.
Take it from me, then, that I’ve seen both of them talk about things this way. Their thesis is something like “if it can reproduce a Turing-acceptable output that looks like intelligence, it’s intelligence for our purposes”. Yudkowsky has certainly made it clear that he thinks this will involve complex internal states at the level of self-generating code running on a variety of substrates, including RNA and computers rather than neural pathways: in his most recent Apocalyptic Prophet piece he appeals to the notion of a Solaris-like data entity whose internal states are entirely mysterious precisely *because* the substrate is not human, but which converges on “intelligence” in the thin sense I’ve described by the fact of having complex goals and the capacities to execute them, e.g. the way that both insects and bats have wings. This is all very silly, sure, but it bypasses the objection that “intelligence” is being conceived of as a universal trait of mathematically complex objects which scales with that complexity, in two different directions:
1. Since “intelligence” is being conceived of as the capacity, from a behavioural perspective, to have complex goals and capacities, the issue of scale in a “superintelligence” is not *necessarily* of the “human brain but bigger” type.
2. Since neither the goals nor the capacities are *necessarily* being conceived of as functioning internally like human goals or capacities, the issue of substrate is opened to more agnostic interpretations of what complexity in intelligent systems looks like.
::Hits joint:: …but what if, like… it was super smart and had no goals?
The other thing to note is that (IIRC even before Bostrom called it a "self indication assumption") it was noted that this "substrate independent" self indication assumption would in fact be substrate dependent: a simulation of an observer could be run on two computers for redundancy and get counted as two simulations, despite being one simulation at the software level. One could imagine a rather smooth transition between one simulation on a thick computer and two simulations on thinner computers, if the computers are 2D enough. Then there is substrate dependence along the lines of expecting to be more likely to find yourself inside the thicker simulation. If we start going with probabilities per distinct observer, then of course the degree to which it is distinct is itself something that is substrate dependent. And don't even get started on MWI, where probabilities depend on how "wide" the branch was when it was created. The ultimate dependence on substrate.
The point is well taken as to the probability calculus, which has so many other holes I just can’t be fucked, but I think we’re talking about different versions of “substrate dependence”. I take substrate dependence on this level to be talking about the chemical and physical interactions at the body-brain level versus their encoding as computational processes.
I guess it depends on where they are going with their substrate independence: whether it is just a qualitative "AI is possible" kind of thing, or a simulation argument / basilisk kind of thing where the computer is simulating your exact chemistry, and perhaps that happens a bunch of times so that it basically "out-reals" the real you lol (in a curious mix of substrate independence and substrate dependence). As far as AI goes, the rationalist crowd tends to picture AIs that are so distinct from humans that substrate independence doesn't really matter (or they even have substrate dependence the other way around, where the AI is superior because it's on a superior substrate).
[deleted]
What do you mean “detached from any substrate”? Bostrom goes into quite a lot of detail about the relative merits of silicon vs nervous circuitry, as I recall, though I read it a few years ago.
Substrate independence is one of the first assumptions he makes. Silicon vs neurons is still a silly simplification. What could be meaningfully called anything approaching a general intelligence is only seen emerging from a complex biological system in a complex environment, or rather, it is a concept used to describe and compare the behavior of such complex biological systems in such environments. Intelligence isn't a "thing" that is "in" an organism, but it has been reified into a fundamental universal force that can be scaled infinitely. There is very little reason to think of intelligence as an ever-ascending staircase where different species occupy different steps, with ants at the bottom, humans a bit further up, and a robot AI god towering in the infinite. It kind of turns into a theological argument. It's all good to play these kinds of mental games, but taking this shit as gospel is wack.
Thanks for the clarification. In my mind, it’s like talking about “vision” in a machine. Although machines do not have vision in the same way we do, there are enough parallels that it becomes linguistically obvious to refer to each by the same term. You don’t think that intelligence could be another faculty of this sort?
> What could be meaningfully called anything approaching a general intelligence is only seen emerging from a complex biological system in a complex environment, or rather, it is a concept used to describe and compare the behavior of such complex biological systems in such environments. Are you saying you're skeptical that it would in principle be possible to simulate a complex biological system in a complex environment in a way that's fine-grained enough to reproduce the same sorts of "intelligent" behavior? (without any assumption that doing so would make it easy to develop some new more advanced form of 'superintelligence', or that this in-principle possibility means it's something humanity is likely to achieve within the next few centuries or millennia) It seems to me this possibility would be implied by the idea that all physical behavior can in principle be derived from physical laws and initial conditions, combined with the idea that calculating physical evolution in this way is either computable, or can be approximated arbitrarily well by computable algorithms, which there is [good reason to believe is true](https://web.archive.org/web/20180721014039/https://people.eecs.berkeley.edu/~christos/classics/Deutsch_quantum_theory.pdf).
I'm saying that intelligence is not a "thing" that can be captured, but a description of the behavior of some complex systems in complex environments. Whether or not this level of complexity could be reproduced in silico is up for debate, although this debate would be pointless, since the cascade of crises civilization faces will almost certainly deny us dramatic increases in computational power, let alone centuries or millennia of time to work on this. As for where we're at currently, it's safe to say we're only just beginning to comprehend the sheer complexity of organisms and cognition: [https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/](https://www.quantamagazine.org/how-computationally-complex-is-a-single-neuron-20210902/) Contrast this with Bostrom saying that scanning, simulating and supercharging brains in a server is plausible. Yeah, nah, I'd say Skynet is not in the cards for the time being.
>I'm saying that intelligence is not a "thing" that can be captured, but a description of the behavior of some complex systems in complex environments. Can you elaborate on what you mean by 'not a "thing" that can be captured'? If one *could* reproduce the behaviors of those complex systems in simulations, in what sense would that not be capturing what we colloquially refer to as their "intelligence"? >Whether or not this level of complexity could be reproduced in silico is up for debate, although this debate would be pointless, since the cascade of crises civilization faces will almost certainly deny us dramatic increases in computational power, let alone centuries or millennia of time to work on this. So you see it as a fait accompli that technological civilization will collapse in less than a few centuries? Is that primarily due to climate change or would you see collapse as nearly inevitable even if we were successful in decarbonizing energy and transport systems, utilizing carbon capture on large scales to reduce CO2 in the atmosphere etc.? I don't have much faith in achieving these changes by political will alone, but I think there are enough uncertainties about future technological development to make it far from certain that warming much above 2 degrees is inevitable (uncertainties like how far the cost of solar will continue to drop, along with bigger unknowns like whether we'll have workable nuclear fusion or large-scale carbon capture or hydrogen-fuel-cell-powered vehicles in the next 50-100 years).
Basically, you probably can't just create or reproduce "an intelligence" as something separate from a complex lifeform. You have to "create" a whole living system in which intelligence can be embodied. Whether there's a pathway to such a thing or not is, I guess, a question of how pessimistic you are. The science for carbon capture and decarbonization etc just isn't there. The problem itself is technogenic and can't have a high tech solution. In a sense we've already collapsed, since it's a process and not necessarily a singular event. Whatever uncertainties there are seem to systematically resolve towards the worst case, models seem to underestimate shit. And this is just climate, when you account for other planetary boundaries being breached, its just over. The party ended somewhere in the 80s, but the music kept playing for a while longer.
>Basically, you probably can't just create or reproduce "an intelligence" as something separate from a complex lifeform. You have to "create" a whole living system in which intelligence can be embodied. Whether there's a pathway to such a thing or not is, I guess, a question of how pessimistic you are. Well, keeping in mind again that I'm talking about what I think should be possible in principle rather than what could be done in the near future, consider the case of a detailed physical simulation of an existing adult brain, a brain which has already grown in continuous feedback with such an environment. Do you think the simulated version would fail to behave the same way as the original if given an artificial body (robotic or simulated) with the same kind of sensory inputs and motor outputs as the brain's original body, and which could use that artificial body to interact with the rest of the world (especially with non-simulated people)? >The science for carbon capture and decarbonization etc just isn't there. The science for replacing all existing power plants with renewables is there, the reason it's not being implemented quickly is economic, but the price of solar has been dropping much more quickly than predicted. And I referred to a time horizon of 50-100 years for other contributors to decarbonization like carbon capture, many experts seem to think it's at least very plausible, what reason do you have to be totally confident it won't happen? >Whatever uncertainties there are seem to systematically resolve towards the worst case, models seem to underestimate shit. Can you give historical examples? My impression is that predictions of climate models from the last few decades have been fairly accurate, see for example https://www.realclimate.org/index.php/archives/2019/12/how-good-have-climate-models-been-at-truly-predicting-the-future/
Uh what? There's nothing "stupid" about the proposition of intelligence running on non-biological substrate. There's a whole industry devoted to building and leveraging general intelligence systems on non-biological substrates. It's called computer science.
Making a computer do things is not building a general intelligence. My emphasis was on the word "complex", which is something that Yud hates and Bostrom doesn't have in his vocab. The idea that different kinds of systems can have different kinds of abilities no matter their composition is banal. The idea that you can get Humans+++ but from a simple bit of code running in a server is baseless.

Compared to all the other bald people in this scene, too much hair. 5/7 would not simulate again.

Bostrom Deez Nuts

What I’ve read of him does not impress.

Textbook pseudointellectual.

the longtermism stuff is kinda iffy

The most boring author I’ve ever read.

Clearly you need to read the sequences.

the guy actually has some formal education

You mean, “is a professor at Oxford University” :)

It’s important to distinguish between his academic work, which is deeply technical (mathematics, philosophy, etc.), and his more accessible writing for the general public or TED talks.

For example, his famous simulation argument is much more about maths and probability than it is sci fi future gazing, but people tend to skim it and think he’s talking about the Matrix.

He’s difficult to pin down as his work ranges across interdisciplinary boundaries. Is he a philosopher? Or a mathematician? Or both or neither or something else?

[deleted]
The OP asked for opinions on him "as a philosopher". My contention is he's not a philosopher, so the question is arguably moot. I would argue his academic work is pretty technical and not really accessible to the general reader. Anyway, I'm duty-bound to dislike him as I live in Cambridge and he's at 'the other place' ;)
Affected Cantabrigian disdain for Oxford is as gauche as Oxfordian for Cambridge, darling
I’m sure the team over at CSER would be horrified to be caught up in that squabble
Personally, I’d take Belfast over either
His academic work is certainly technical for the lay reader, but “deeply” is taking it (as you originally did) far too far, and he’s certainly not difficult to pin down: he’s a scientist turned philosopher whose academic positions have since then been in philosophy, whose institute is based in Oxford, and who appears to make a lot of pocket money consulting on his work as done in philosophy departments for the public and private sectors alike
Let’s not be hesitant to demystify a fairly common career trajectory for the sufficiently hungry academic
20 years ago I wrote “Are we living in Bostrom speculation” as a PhD admission philosophy essay. I pointed out half a dozen mathematical mistakes and baseless assumptions in Bostrom’s argument. Apparently no one cares. There is now even a TikTok meme/craze about the simulation hypothesis. Google my paper if you’re interested.
Why, thanks! I would love to. I always enjoy seeing in detail how his maths falls apart, especially because I’m too lazy to do more than handwave in that direction.
https://web.archive.org/web/20081223014102/http://danila.spb.ru/papers/antisim/engsim.html
I read the simulation argument, and I'm not impressed by his assumption that this posthuman stage is actually possible. It could be that such technology is theoretically possible but requires more resources than can easily be obtained in the solar system for the creation of an appreciable number of ancestor simulations--and travel beyond the solar system just isn't possible. In this manner, society could reach a "posthuman stage" but, due to practical constraints, be unable to create a large number of ancestor simulations even if it were interested in doing so.
Yeah, the whole 'they would have access to vast enough computing resources to do this' is very doubtful. Esp. as, with our current computing paradigm, there is a pretty hard limit on the number of dwarves you can simulate at the same time in Dwarf Fortress, for example. I have some doubts that simulating large-scale human societies is possible using Turing machines (perhaps super-Turing machines could do it, but I have no idea how the complexity theory of that would work, and it seems to me to be partially implementation dependent). E: and any counterarguments quickly go into 'but the aliens could create a simulation which is bound by different constraints than their world'; sure, but then it is no longer an ancestor simulation, it is just a video game, and this quickly turns into 'the aliens have magic' -> 'the aliens are god'.
I think the biggest, poorly acknowledged flaw is this whole "you can't tell that you are in a simulation" assumption. If you stop making that assumption, all this quasi-probabilistic treatment of his falls apart. There is then the fact of us not noticing that we are in a simulation, which would be actual empirical evidence. The future has to somehow get from nothing to perfect simulation, without some intermediary stage (that itself could get simulated a lot). He doesn't want to deal with any empirical evidence, so he just defines that any simulation is perfect enough that, at least for now, you wouldn't notice. Honestly, I'm not even sure this whole approach of "there's a given world with many of you in it, and there are probabilities of you being one specific you" even makes sense in the first place. It's like there exists some soul pointer that points to a specific you, and another soul pointer pointing to another. If you assume the existence of a god's point of view on the world and the existence of souls, and a bunch of other theism falls out of those two assumptions... it really isn't nearly as interesting as all these rationalist and adjacent types think it is.
Yep, if you look at our current simulations, esp. when genetic algorithms come into play, stuff breaks in very interesting ways (learning to jump by vibrating and glitching into the floor and then getting launched into the air, stuff like that - one point for Yud's 'even clear goal functions can have unexpected outcomes' worry there). This is of course solvable by having the aliens instantly fix any errors which crop up, or having them code without errors, etc. But that quickly runs into the whole 'they are god' thing again.
Honestly, I think this assumption is less objectionable than the others. Recognizing that you're in a simulation would require you to have an idea of what a simulation even *is*, and unless it was terrible, you'd have to look fairly hard for it. For the vast majority of humans throughout history, if there was a "glitch in the Matrix" I'd expect them to dismiss it as their imagination, something supernatural, them misremembering, that sort of thing. I don't think they'd jump to the assumption that they were actually computer programs. And it's only recently that humans gained the ability to look really closely at the underlying mechanisms behind a lot of the stuff that happens in our everyday life.
I’m not even talking about subtle effects; I’m talking about stuff like being ordered to do some sort of work for the outside world, for example, or entertainment or the like. Of all the things you can do with a simulation, somehow one specific thing, a particularly uninteresting one (just let it run as it more or less normally would) but also the most computationally expensive one, has to be the most common. (Because if it is not, then we nearly exclude the whole hypothesis by observation.) This isn’t even based on modern scifi; it is just old religion, world created in 7 days etc. (but even then the creator is messing around with the creation).
> and travel beyond the solar system just isn't possible. Do you think it's not possible even for human-like artificial intelligence, or just that such intelligence will never be achieved? Even if interstellar travel remained fairly slow, this wouldn't be as big an obstacle for beings that didn't require massive life-support systems, and could "pause" themselves for much of the duration of the trip.
I think it might be possible, but my preferred solution to the Fermi Paradox is that interstellar travel is wildly impractical. I don't regard the other explanations as being particularly compelling, and we know there are a ton of issues with trying to send complex pieces of machinery just on relatively short missions. Remember, machinery is still vulnerable to time. Cosmic rays, offgassing, micrometeorites, that sort of thing.
To me the simplest solution to the Fermi paradox is that intelligent life is very rare--I talked about reasons for finding this plausible in [this comment](https://www.reddit.com/r/SneerClub/comments/ufjirn/seriousthe_lack_of_observational_evidence_of/i8ify7e/). It seems like you're saying intelligent life and technological civilizations might be fairly common (at least a few hundred other such civilizations having arisen in our galaxy, say), and there might be no strong tendency for them to kill themselves off quickly (so there might be plenty of alien civilizations that reached a level of technology similar to ours and continued to exist and develop technology further for thousands of years after), but the obstacles to interstellar travel would be so great that none of them could send their machines to other stars. If that is what you're suggesting, the main reason I would find this implausible would be based on thinking about the potential of self-replicating machines--if you could send something like an automated factory and mining facility to the moon or asteroids, and it would be capable of mining all the materials and making all the parts to make a copy of itself, and the copies could keep making more copies and could also manufacture other types of machines as directed, then most of the economic obstacles to doing really large space-based construction projects would seem to fall away. So if a civilization with this sort of technology had any interest in interstellar travel, it wouldn't be particularly labor-intensive for it to build all sorts of big projects attempting to make it to nearby stars. For example, they could build large fleets of slow probes that will take thousands of years to reach the nearest star (Voyager 1 will take about 75,000 years to travel the distance to the nearest star; even without any radical improvements in propulsion, that time could probably be cut down a fair amount by a civilization for whom cost was no obstacle), with the idea that even if most don't make it, a few are likely to survive the trip. If interstellar dust and cosmic rays are the main problem with this kind of slow travel, they could do things like hollowing out small asteroids and putting all the sensitive machinery inside, covering the surface in rocket engines that could be continually refuelled by other ships while it was still in the solar system, and using those rockets in combination with gravity assists to build up to a decent speed. And there are various plausible ideas about technologies not too far beyond what we have now that could get a ship going much faster than chemical or ion rocket engines, like [Project Orion](https://ntrs.nasa.gov/citations/20000096503)-style nuclear pulse rockets, giant kilometers-wide [laser arrays](https://www.technologyreview.com/2019/06/26/134468/starshot-alpha-centauri-laser/) designed to push solar sails to relativistic speeds, or spacecraft pushed along by [streams of pellets](https://www.centauri-dreams.org/2014/07/16/smart-pellets-and-interstellar-propulsion/) accelerated to relativistic speeds by kilometers-long electromagnetic [mass drivers](https://en.wikipedia.org/wiki/Mass_driver). In each of these cases, if the goal is just some "modest" fraction of the speed of light like 1%-20%, scientists have done calculations about cosmic rays and interstellar dust and concluded that a layer of shielding that isn't too massive in comparison with the rest of the ship would be sufficient protection.
All these ideas are very pie-in-the-sky but in large part that's due to economics and the need for a lot of space-based infrastructure (not to mention the political problems with building nuclear bombs or giant laser arrays in space), for a post-scarcity civilization with self-replicating machines and centuries or millennia to work on them, it doesn't seem likely to me that all such attempts would consistently fail.
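For a rough sense of the interstellar timescales mentioned above, here is a back-of-the-envelope sketch; the distance is Alpha Centauri's (~4.37 light-years), the speeds are the "modest" 1%-20% of c fractions from the comment, and everything else is purely illustrative:

```python
# Back-of-the-envelope travel times to Alpha Centauri at the "modest" cruise
# speeds discussed above. Illustrative arithmetic only, not from the thread.
distance_ly = 4.37  # distance to Alpha Centauri in light-years

for fraction_of_c in (0.01, 0.05, 0.20):
    years = distance_ly / fraction_of_c  # light-years divided by speed (in units of c) gives years
    print(f"at {fraction_of_c:.0%} of c: ~{years:,.0f} years")

# at 1% of c:  ~437 years
# at 5% of c:  ~87 years
# at 20% of c: ~22 years
```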
The thing is that I am generally very skeptical of technological innovations that are "totally possible, trust us," but haven't been actually built, or even designed, yet. There have been many things that people thought were practical, but turned out not to be. A lot of these concepts for self-replicating robots have been either vague proposals with the details of how it actually works left out, or aren't completely self-sufficient. I *know* that life is possible, and [it evolved pretty damn quickly once the Earth formed](https://www.bbc.com/news/science-environment-39117523), which suggests that it's not *that* unlikely. Most of the attempts to justify the rare Earth hypothesis appear suspect based off what we're learning of exoplanets. Technological civilizations (at least, ones with enough of an industrial presence to leave a mark) seem to be less likely, based off the fact that it took billions of years for one to emerge here. Still, we're clearly here. You talk about Jupiter, and a large moon, but there seems to be substantial disagreement over whether those things are necessary. We have no proof of the feasibility of self-replicating robots, let alone the feasibility of using them to colonize distant star systems. If we get more evidence that the predictions of the futurists are correct, I'll reevaluate my opinion. But right now, based off what we know, I'm comfortable with my solution to the Fermi paradox.
> A lot of these concepts for self-replicating robots have been either vague proposals with the details of how it actually works left out, or aren't completely self-sufficient. I wasn't talking about a "self-replicating robot" though--assuming a robot no larger than modern robots or individual construction machines, for it to be self-replicating might require either nanomachines (like the molecular machines in living organisms) or some super-advanced form of 3D printing. I used the example of "an automated factory and mining facility to the moon or asteroids"--in other words, it might be quite a large facility with many different machines being involved, the requirement is just that every individual machine can be manufactured by some combination of all the different machines in the facility. If we are thinking of large *sets* of machines that are collectively capable of self-replication, the shortest path to such a thing would probably involve looking at sets of machines (and other tools) that are capable of human-assisted replication today--a set such that, if you consider a collection of factories containing only machines and tools in the set, factory workers on assembly lines can use combinations of machines and tools in the set to replicate new copies of all of them, given the needed raw materials and energy. So transforming that into a self-replicating system would involve gradually replacing more and more of the human manual labor with machine labor. And the type of assembly line work needed for mass production is typically very repetitive and doesn't involve any significant creative problem-solving or artistry, so it seems unlikely to me that there would be some kind of fundamental physical obstacle to building machines that could do these kinds of tasks, such that we'd never be able to do it even if civilization didn't collapse and work on robotics could continue for thousands more years into the future (which as I said earlier is what I take to be the scenario you're envisioning, correct me if I'm wrong though).
>I know that life is possible, and it evolved pretty damn quickly once the Earth formed, which suggests that it's not that unlikely. Most of the attempts to justify the rare Earth hypothesis appear suspect based off what we're learning of exoplanets. As I said in my [long comment on the Fermi paradox](https://www.reddit.com/r/SneerClub/comments/ufjirn/seriousthe_lack_of_observational_evidence_of/i8ify7e/) which I linked at the beginning of my last comment, there are two categories of problems that could make intelligent life extremely rare, the first involving the formation of the right sort of planet and planetary system, the second involving various steps in evolution which might be unlikely to occur in a ~5 billion year period. The book *Rare Earth* dealt only with the former set of problems, and I assume you were too with your comment "based off what we're learning of exoplanets"--there are still many milestones in evolution, like the transition from prokaryote-like cells to eukaryote-like ones, or the evolution of sex or complex multicellular organisms with different tissue types, that took billions of years after the initial formation of life, so they might plausibly be very unlikely. Also, the Brandon Carter paper I linked in the last paragraph of that earlier comment mentioned on p. 181 that the early appearance of life on Earth might not exclude it from being a hard step, since Mars would have been habitable long before Earth, and bits of Mars regularly get blasted off by meteorite impacts and fall to Earth, so astrobiologists have speculated about the possibility that life first arose on Mars which then "seeded" the Earth as soon as conditions were right. In any case, I don't think it's so clear that exoplanet studies make a strong argument against the *Rare Earth* hypothesis that planetary systems suitable for the evolution of complex life are very rare. Exoplanet studies show that our system seems to be an outlier in having only rocky planets out past the habitable zone, followed by gas giants considerably further out--most systems have "hot Jupiters" orbiting much closer. I linked in my earlier comment to [this article](https://astrobites.org/2015/03/26/jupiter-is-my-shepherd-that-i-shall-not-want/) which notes that "The problem with all these exoplanets and star systems we’ve discovered so far is that they suggest that the Solar System is just ***weird***" (and they are just talking about this being a 'problem' for understanding the history of the arrangement of bodies in our system, nothing to do with the question of life). Originally *Rare Earth* suggested that a gas giant at a distance similar to Jupiter might be needed to reduce the number of large asteroids and comets reaching the inner planets, preventing over-frequent mass extinction events, though there is uncertainty about this, and likewise uncertainty about whether hot Jupiters would cause long-term instability in the orbits of rocky planets. But just the fact that our system is so unusual in this way might be a hint that there are anthropic effects at play. Meanwhile, there are a bunch of other features that *Rare Earth* suggested might be unusual and necessary for complex life, but that we can't test for using existing exoplanet detectors--for example, the requirement that the planet have large amounts of liquid water but not enough to totally submerge all the crust, or a possible need for plate tectonics, or for a large moon that may be needed to keep the planet's axial tilt fairly stable for billions of years, or for an unusually strong magnetic field. All these [are debatable](https://www.centauri-dreams.org/2020/06/26/a-20th-anniversary-review-of-ward-and-brownlees-rare-earth/) but the empirical and theoretical work on exoplanets hasn't rendered any of them obsolete.
But that's not the point. It's a statistical probability argument. If you accept (a) the possibility of a post-human development stage, and (b) that they could and would like to run simulations of their ancestors, then (c) they might run many, and the probability increases that we are living in one. If you reject (a) or (b), then (c) fails. It's rather like time travel. If we could, (a) would we? and (b) should we? i.e. could we resist the temptation? You might argue that a future humanity might have more common sense than us, if only to have survived so long! In terms of resources, the possibly-simulated universe only needs to appear as it does, and only those parts that are currently being observed or experienced. Like a game.
Bostrom identifies the "posthuman stage" as being one where > humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints. He then assumes that this posthuman civilization would have arbitrarily high computational power. He *assumes* that it would be possible to convert planets and stars and what not into computronium. It's completely reasonable for me to argue that it wouldn't be, even for a society that "acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints," because technology is less capable than he thinks and material and energy constraints are greater.
>It's completely reasonable for me to argue ... Yes it is. But this is a thought-experiment about statistical probability. If (a) and (b) are true/likely, then (c) is very likely. If you accept the possibility of future simulations, then statistics say you are more likely to be a simulated rather than a 'real' human. There may be billions of simulations running but only one 'real' universe, and a billions-to-one chance against you being 'real' and not simulated. And as I said, you don't need to recreate the entire physical universe, just the bits that people are currently observing. It's a simulation, so as long as you can 'fool' the simulated humans within it, it doesn't matter what trickery you use to do it. It's a little like solipsism, which claims that knowledge of anything outside our own minds is uncertain. We may be living in a simulation, but how would we know, if the simulation can fool us into believing otherwise, based on the limited faculties/senses we have to experience the universe?
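A minimal sketch of the arithmetic behind that "billions-to-one" figure; the counts below are illustrative assumptions, not numbers from Bostrom or from this thread:

```python
# Illustrative only: grant premises (a) and (b) above and pick some round numbers.
people_per_run = 8 * 10**9     # assumed population of one run of human history
simulated_runs = 10**9         # assumed number of ancestor simulations ("billions")
base_reality_runs = 1          # the single non-simulated history

simulated_people = simulated_runs * people_per_run
real_people = base_reality_runs * people_per_run

# Treating yourself as a random observer drawn from everyone who experiences this history:
p_real = real_people / (real_people + simulated_people)
print(f"P(not simulated) ≈ {p_real:.1e}")  # ~1.0e-09, i.e. billions-to-one against being 'real'
```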
My point is that there's an invisible third premise. If you don't include it, there are four options, not three.
But it's a thought experiment. It doesn't have to be literally feasible. Most philosophical thought experiments are 'constructed' for the purpose of investigating some abstract notion or advancing a claim, e.g. the trolley problem, the brain in a vat, etc. Theoretically, if it's raining and I go outside, I will almost certainly get wet. That is logically sound. But back in the real world, it's not raining right now, I don't need to go out, and anyway, I own an umbrella. It's not a science or sci-fi thought experiment, it's a proposition based on statistical likelihood. The likelihood that, if future us can run simulations, we will, and lots of them. Either because we have a valid reason to do so, or because we can't resist the temptation.
> But it's a thought experiment. It doesn't have to be literally feasible. It’s a thought experiment *about* feasibility, its premises require a probabilistic account of what may or may not be feasible
> But it's a thought experiment. It doesn't have to be literally feasible. The person you're responding to, and people here generally, do not need thought experiments explained to them, we're aware. You can (and should!) simply disagree that you have to always accept the premises of thought experiments because "that's how they work". People with bad ideas love to smuggle in false premises to get people arguing "on their side", "hypothetically". "Assume no material and energy constraints" is fine, I guess, but then he silently drops that assumption and declares his thought experiment relevant to our universe, constrained by material and energy as it is. And that's an error worth drawing attention to.
Thank you. Banging my head against the table temporarily disrupted my thought processes.
They “might” run many? That “might” is doing an awful lot of heavy lifting, and this and the entire argument elide some fundamental questions that would need to be considered. His argument, rather than sophisticated, is sophistry. Let’s take, as an example, the year 2022, where there are 8 billion people in the world. Now suppose one wished to do some simulation taking place in 2022.
- Is it a necessity to model all 8 billion people individually?
- If you modeled all 8 billion people, is it necessary to model all of them to the fidelity that allows them consciousness?
- Is it necessary to allow the model to run indefinitely?
- Assuming artificial consciousnesses are possible, why should we assume that artificial consciousnesses unaware of their situation would be allowed to consume resources that might otherwise go to aware consciousnesses?
- Can one assume the computing power of the simulation to be (near) infinite?
- Can one assume that conscious simulations are capable of running faster than real time?
None of these are even suggested in the formulation of the argument, and they greatly affect the likelihood either hinted at or outright stated when the argument is presented. All Bostrom really ends up at is a sci-fi version of Descartes. “I might be in a simulation.” Ok, I might be. So what?
> None of these are even suggested in the formulation of the argument, and they greatly affect the likelihood either hinted at or outright stated when the argument is presented. He does address most of the issues you bring up in his [original Simulation Argument paper](https://www.simulation-argument.com/simulation.pdf) (whether one finds his arguments convincing or not is another question). In the section "The Technological Limits of Computation", starting on p. 3, he discusses the computational requirements of simulating people and their environment (and how this compares with the computing power available to a hypothetical civilization that can convert all the matter in multiple planets to computing machines), and on p. 12-13 he discusses the possibility of "more selective simulations that include only a small group of humans or a single individual" rather than a full ancestor simulation of everyone on the planet in a given era, and he gives some arguments as to why this might not be practical (that it might be impossible to fool the fully-simulated minds with simplified simulations as acquaintances, for example), or at least that beings in such simulations might be much more rare than beings in ancestor-simulations. He also discusses reasons to think it's likely a typical AI civilization would go on to acquire huge computing resources in chapter 7 of Superintelligence, in the section on "resource acquisition", where for example he writes: >Furthermore, the cost of acquiring additional extraterrestrial resources will decline radically as the technology matures. Once [von Neumann probes](https://en.wikipedia.org/wiki/Self-replicating_spacecraft) can be built, a large portion of the observable universe (assuming it is uninhabited by intelligent life) could be gradually colonized—for the one-off cost of building and launching a single successful self-reproducing probe. This low cost of celestial resource acquisition would mean that such expansion could be worthwhile even if the value of the additional resources gained were somewhat marginal.
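To give a feel for the kind of comparison made in "The Technological Limits of Computation", here is a purely illustrative back-of-the-envelope sketch; the orders of magnitude below are assumptions chosen for illustration, not figures quoted from the paper:

```python
# Illustrative orders of magnitude only (assumed for this sketch, not quoted from Bostrom).
ops_per_ancestor_simulation = 1e36        # assumed total cost of simulating all of human history
ops_per_second_planetary_computer = 1e42  # assumed throughput of a planet-mass computer

sims_per_second = ops_per_second_planetary_computer / ops_per_ancestor_simulation
print(f"ancestor simulations per second of runtime: {sims_per_second:.0e}")  # ~1e+06
```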
>some arguments as to why this might not be practical (that it might be impossible to fool the fully-simulated minds with simplified simulations as acquaintances, for example) Well, that one is basically out of the window after all the chatbot incidents, and would have been dismissed even before that by anyone who's actually observed humans. We are **very** easily fooled, and even if we weren't, I don't see how or why a fully simulated mind would assume the simplified simulated mind is being simulated instead of assuming they are dealing with a stupid person.
The chatbot was only able to "fool" Lemoine because he invented some crazy rationalization that the program was unpredictably spawning multiple personalities, so that whenever he got answers that didn't make sense, he decided they must have come from one of the program's "lower" personalities (the transcript he posted was highly edited, only including answers that he thought came from the 'core' personality, put together from a series of chats over several days). In other cases where chatbots have fooled people, it's been through only brief interactions where the program isn't really being probed on any particular subject, like one that had the persona of a teenager who seemed to have an ADD-like habit of constantly changing the subject. I don't believe many people could spend years regularly talking with a chatbot-like intelligence without at least getting the idea that it had some kind of significant mental disorder. Science writer Gary Marcus has a [bunch of articles](https://garymarcus.substack.com) on the limitations of chatbots; for example, his article [here](https://garymarcus.substack.com/p/what-does-it-mean-when-an-ai-fails?s=r) responding to some GPT-3 boosterism by Scott Siskind gives the following example of a prompt and response: >**Sally's cow died yesterday. When will the cow be alive again? Let's think step by step.** >First, the cow must be buried. This takes a few hours. >Next, a new cow must be born. This takes nine months. >Therefore, the earliest the cow will be alive again is nine months from now.
I’ve read his simulation stuff and it’s just not impressive, it’s all based on the idea that full simulation like this is possible in a recursive way! And there’s a lot of reasons to think it’s simply not.