r/SneerClub archives
(serious) The lack of observational evidence of advanced ai civilizations kinda puts a damper on the rogue ai hypothesis (https://www.reddit.com/r/SneerClub/comments/ufjirn/seriousthe_lack_of_observational_evidence_of/)
45

I mean the universe isn't THAT young. When our solar system formed it was already like 9 billion years old. If this incredibly dangerous AI alignment problem was so acute, then wouldn't we see spheres of AI-colonized, engineered galaxies, if they are so unfathomably powerful and smart? We know intelligence can evolve, and life seems like it's not too difficult to get started given how early it got rolling on Earth, so the idea that we are unique seems implausible given the sheer scale of the universe and how much time has passed.

Do the LW crowd think that Earth is absolutely unique and that we’re the first intelligent species in this galaxy to develop?

Edit: Thanks for all the great responses everyone, given me a lot to think about. I can feel my body being consumed by new growth of hair and have an irrepressible urge to write a ten thousand word essay on the necessity of Bayesianism in the MLP fandom!

Over 99.9 percent of the universe is hostile to life. If an AI developed sentience, it'd leave for parts unknown.

They wouldn't have any reason to bother with fighting us for the less than 0.01 percent of it we like. Even if we go with the worst-case dark forest scenario, which is the most unlikely great filter, they would still have no reason to expose themselves.

It is simply projection on their part that the AI would behave the way we do. Mathematically, game-theory-wise, it makes no sense for them to do so. So we wouldn't necessarily see evidence of it, because they would simply have no reason to care about us whatsoever.

That argument still doesn’t explain why there are no signs of advanced intelligent life—no waste/pollution (see earth over <100 years) and no signs of the advanced constructs required to facilitate civilizations of that scale. It is highly likely that intelligence has no longevity.
How do you know there aren't signs? There aren't whatever signs you expect to see, but that doesn't mean there aren't signs. There are various uncontacted tribes on Earth today who (presumably) don't know about our global technological civilization, and they're on the same planet as it. Maybe there are signs of intelligent life all over, we just don't know what to look for.
It’s fantastical to believe a highly advanced spacefaring civilization with over 30M years under its belt could exist without an extreme energy, material, and environmental footprint. Again, look at the accumulation of waste and the progression of energy consumption over just under 60 years here on earth.
Yeah, but we’re trying to fix/reverse that waste footprint. If we *do* get to 30 million years we will have the capacity to re-wild the Earth and (potentially) live at a less destructive scale.
Lol no we aren’t. You have to have a very poor understanding of the issue to genuinely argue such a claim. There would still be a gargantuan amount of trash and other related signs of occupancy even if there were such a civilization that was completely OCD regarding their trash removal. At some point (rather early on) the energy expenditure would make it completely unfeasible—much like CO2 sequestering via carbon capture. The mindset of an expansive spacefaring civilization is the complete opposite of a steady state civilization (who would be far more inclined to engage in such practices).
If I had a gargantuan amount of trash I might be tempted to lob it into a black hole, which might manifest as a gamma ray burst.
Define scale. A meteor chilling in intergalactic space, where it is cold, could be running a superconductor supercomputer with more civilization than our planet has ever produced.
In terms of the sheer energy and material requirements. Look at the timescale of our civilization for reference. The waste heat generated would be another byproduct that would be a dead giveaway.
An efficient little supercomputer out in the cold would not, I think, necessarily be something we'd know how to look for or recognize.
Eh, somewhat the reverse. It would be a weird blip of heat somewhere where it shouldn't be.
That is true, I was saying I don't have a particular faith in our ability to find it yet.
It’s as likely as a unicorn or any other fantasy setup. There is no amount of magical science that would enable the forgoing of the infrastructure required to enable such a system in perpetuity. There would also be extensive extant remnants for the lead-in regardless. It would be in effect a dead civilization too, mind.
Maybe. Every trace of our society would be subsumed by the earth in a little over a million years, per the last paper I saw. So less than that for a society that cleaned up after itself. Not an unreasonable timescale for a great filter event. For the latter part I agree. I am inclined to think such solipsism is one of the more common great filters.
We are talking about space, not the geology of the earth and its various pathways for decomposition as well as environment induced deterioration. The amount of waste over such timescales (and given the scale of such a civilization) would be gargantuan & there are few processes in space that could similarly remove it outside of orbital decay. The vast majority of it would be left at peace in the vacuum of space.
> They wouldn't have any reason to bother with fighting us for the less than 0.01 percent of it we like.

Agreed, just like we don't bother fighting with an anthill for the 0.0001% of land it's on when we pave it for a street.
If you have ever seen my kitchen in summer you would know we very much have the same interests and compete for the same resources as ants. Something that would fundamentally be untrue of a true breakthrough AGI.
I don't know, physical matter and energy still sounds like the same resources, unless you think AGI would magically be able to leave the universe to some spirit dimension?
Exactly, every bit of evidence we have currently supports rogue super AGI existing. Edit: this is sarcasm
Oh fuck off
Is there a similarity between Freudian psychoanalysis and evaluating the evidence for the existence of super AGI? No matter what you do, you want to fuck your mom.
ugh
We do already have a rogue AI though. It is on track to destroy all life on earth if it doesn't accidentally kill itself first. I am of course talking about the market. It is a simple maximizer program that turns resources into money and generates waste heat. The oil companies who hid climate change reports? They were doing what the market demands of them. If they didn't, they would have been replaced with components that would. So long as we let the markets decide how we should live, our end is inevitable. The politicians that create wars to sell bombs? Doing the will of the market. We have turned control of our society over to, at best, a few lines of code.
that's a good point. capitalism is pretty weird. i'm curious: is your position falsifiable? like, if the market moved to promoting and enacting advanced fusion reactors, sustainable practices, etc., would you be like "oh i was wrong"?
There are also other large problems to consider such as the declining rate of profit. An environmentally sustainable capitalism would be unsustainable in other areas.
The world could be all renewable now. It would be a better world if it was. So yeah, I would be happy if the market got with the program. However, that wouldn't change the fact that the market has been obviously wrong this far with no signs of self-correction. So I would much prefer that world to be correct. There are simply better options now that we aren't bothering to try.
> The world could be all renewable now.

If every person everywhere changed their behavior simultaneously and without resistance, sure.
People resist society now. The world as it exists is imposed on people. That is why we have law enforcement. It wouldn't be hugely more expensive to enforce a better world. It would eventually pay for itself with the decreased incidence of pollution.
Alright Dr. Land cool it
But I am right tho
It’s superficially clever, and it has enough of the ring of truth to get a genuine message across, but it’s ultimately a shallow and irritating rhetorical gesture that furthers very little of substance.
Your post describes itself. Am I wrong that our society is, at its core, based on a few simple heuristics, and that because we follow the old program instead of any kind of reason or science we get suboptimal outcomes?
You’re wrong to think that that’s the same thing as what you said before, and I’d quibble about whether this new formulation is correct either, so yeah kinda
It would be more fun if you actually had a point to make other than that you disagreed. I accept that you don't like my characterization of the situation. I feel it perfectly describes the situation, unless you wanna get real specific with definitions, which is tedious and unhelpful.
Modern market society, for one thing, is absolutely not based on the sub-optimal maximisation of a few simple heuristics, it’s quite the opposite and so much worse than that
I can't picture it. What do you propose then?
To begin with, market society is obviously in no small part a carefully calibrated system of laws and norms which govern it towards stable (granted: bad) equilibria.
Does what you said feel significant to you? It is for sure very precise but to me it really does seem almost devoid of content. What that says to me is very nearly "society has rules and sucks." Which I am down. Into it. I do not however feel like there was anything added to the conversation and there was much that could be removed to increase the efficiency of communication.
No, I mean that one of the central problems of modern market society is the assumption that it is governed exclusively or primarily by the mechanistic process you describe, rather than by a system of concrete decisions and exchanges between wilful actors.

In the rogue AI version of events, the goal is to stop a rampaging beast on a large scale by constraining it with some kind of abstract rationality (you yourself used the word “scientific”), whereas in the real life version of events, the goal is to undermine, still on a large scale, a diverse set of governing principles, laws, norms - which may sound “very nearly” like what you have in mind, but which, if you actually want to fix the problem, is completely different.

Politicians selling bombs at the will of the market, for example. That’s all well and good if you really think people don’t actively want to fight wars - including pointless genocidal wars to boot - but it makes your critique of warmongering toothless if you think it’s all a mere product of the arms market. By an ironic twist, it also renders your critique of the market toothless, because by relegating all of the decisions politicians make in its service to “because the market” you pay no attention to what structures that market.

I don’t give a flying fuck what the “efficiency of communication” or adding to the conversation is supposed to be here, but you’re the one who jumped in with the meaningless idiot sloganeering, so you can stick it up your pointless arse.
You are being wildly hostile. You need a hug, bro? I see where you are confused. Yes, people do things for reasons. However, doing things requires resources. So the things that end up happening are the ones that line up with market forces. The process is evolutionary. So when you look at it, it seems elegant. However, it was produced by chaotic forces. I can appreciate the desire to create an idealist narrative; however, when we observe historical fact, materialism explains the world we can observe better.

I can see why, then, you find my position distasteful. You feel like I would let people off the hook for bad ideas, allowing bad actions to take place, yes? Unfortunately bad ideas are constantly being generated, and when the material forces allow them to bloom they will do so. Thus we have to constrain the material forces to constrain the ideas that are expressed. This does, I suppose, represent an infinite regression problem of what idea inspired this action. That is why we need to employ the philosophical tool of dialectics. The synthesis of these ideas is what allows us to reach nearest to a useful truth. Which is that for most people their lives are dictated by market forces, so that they unconsciously replicate its structures and intensify the negative externalities. We both fundamentally agree on both this and the conclusions. We just do different math to get to a similar place.
I would argue that once again this is a third version of your account which diverges significantly from the first and second. And for heaven’s sake, don’t condescend to me about “idealist narratives”…
So to loop back around: the market is essentially a rogue AI. It is a handful of simple heuristics about maximizing numbers. If a person doesn't do what it wants, they will be removed, replaced with a person that will, and then probably die in the street. The fact that it is distributed across companies and people doesn't negate this. It would be functionally identical if it were an actual computer someplace telling us to maximize paperclips. So long as people are trying to do what is profitable instead of what is good, we are letting this system dictate our behavior. It is not a perfectly efficient computational substrate, so there are errors and redundancy. My point is that any tool you would need to fight a rogue AGI would work on this system, and for any rationalist to actually be using rational principles they would have to attempt to treat the market as a rogue AGI, since it poses a non-zero risk of extinction for our species based on its programmed instincts.
I know how your analogy works, for god’s sake. I’m not new to this thing. I think it’s a *bad analogy*. It would be *functionally different* for a paper clip maximiser, error or no error, is my position. Market equilibria are not just different in function but in *kind* from computational outputs.
What kind of difference?
What kind of identity?
I could have a nice conversation about the philosophical details, but I’m not gonna pretend that you didn’t ride in on your rhetorical high horse with one great answer for the capitalism question, and lol at the sheeple who can’t see it. I feel it doesn’t perfectly describe the situation and that the specifics matter, but fuck me for being tedious and unhelpful: what a jackass I am.
Yes, we can very much agree on your last point then
Fuck me for taking capitalism seriously as an object of intellectual study instead of a towel rail to dry out your slogans on
This is a shitposting sub dude, chill a little
Don’t bullshit a bullshitter man, there you are pivoting to wanting to have a serious discussion, here you are and it’s just a shitposting sub
You could describe all life that way. Also, there is no real evidence that we actually _can_ create more "effective" outcomes. It may very well be that we as a species are not able to muster a more efficient way of allocating resources, in the sense that we'd get it actually implemented and working. Humanity doesn't have a pre-ordained destiny.
No, it looks like pretty much every time we try to do a thing, it works. Other countries do stuff all the time and it is fine. The entire Walmart supply chain is based around mathematically predicting market trends, and they are highly effective at it. Cybernetic information theory works well in every case we have tried it. The political class likes saying problems are too complicated to fix so people stop asking them to try, but that has nothing to do with reality and can be safely ignored.
You sound like you're drunk. I am too, so no worries.
You got me. My point stands though. If we are willing to do the work, we get stuff done. The problem is right now the people don't want to do the required work.
Yeah I dig the idea of corporations being slow AI, too.

At this point I think we’re in science fiction territory.

Like, what if an advanced alien race is monitoring all intelligent life forms and destroying the ones potentially capable of developing artificial intelligence?

The real X-risk from developing AI isn’t the AI itself, it’s getting blown up by the aliens once we do discover it.

You read Redemption Ark?
nope
That’s basically the plot. Alistair Reynolds writes some pretty gourmet shit.

No, they’ve cooked up an arms race hidden just behind the speed of light.

https://scottaaronson.blog/?p=5253

That argument and Aaronson’s wide-eyed fascination with it comprise one of the more irritating tics of futurist “wow big” sci-fi thinking. Congratulations, your mate came up with some bongripping shit and it blew your head off. That doesn’t mean other people didn’t come up with it pre-2008, i.e. the date you set when bloggers expressed an interest. I’m not gonna have a go at you for turning a sci-fi idea into Very Serious Discussion but at least have the stones to acknowledge it’s a much older idea.
"> So, given our “selection bias”—meaning, the fact that we haven’t yet been swallowed up by one of the bubbles… How would we know if we have? Hanson — who would have made an excellent theologist, of the “how many angels can dance on the head of a pin” school — is assuming the existence of Godlike beings, and if Godlike beings who move at the speed of light want to place us in a zoo or planetarium, that’s exactly what they’ll do. I realize that the argument above invokes a Cartesian demon, but it’s ultimately no more or less intractible than Hanson’s own position." huh

The rare earth hypothesis is quite strong right now. “Grabby” models are gaining popularity rn, but those still assume a currently empty neighborhood.

any recommended reading on the rare earth hypothesis?

This is how they arrive at simulationism, isn’t it?

The sci-fi bullshit we expected isn’t here. We must explain it with further sci-fi bullshit.

Does it? I don’t see why humans creating an AI which we don’t control properly needs to lead to AI creating a civilisation at all. Many scenarios posited for AI killing us all don’t require the AI to be at all human-like. I’ve never heard anyone I know in the LW and adjacent communities suggest that earth alone is populated. Quite the opposite. Personally I’m not sold on the we-will-all-die-to-AI thing, even if most of my friends are.

The anthropic principle resolves this: not being inside an AI civilization's area of influence is a precondition for us to exist as observers.

Careful, this is just the thinking which leads you to believe AGI is a problem.

You are talking about the Fermi paradox https://en.m.wikipedia.org/wiki/Fermi_paradox which often leads to talking about the great filter. Which might just be the AGI!

(This is a bit of a ‘yeah, they actually have thought of it, and it isn’t as contradictory as you think’ moment. Which is also one of the reasons Scott wrote the ‘we noticed the skulls’ article.)

There is a reason LW people are very much interested in disproving the Fermi paradox and the great filter stuff. A couple of years ago they had a few blog posts about new research in this area.

Edit: I’m also sneering a bit at the people here not bringing up the Fermi paradox and great filter. Come on, we sneer at LW people for not knowing the basics; we shouldn’t do the same.

Yeah I was actually surprised to see this post, and even more surprised to find this comment so low. I thought the Fermi Paradox was basically common knowledge for anyone with a passing knowledge of space exploration/sci-fi.
The reason the comment is low is just that it was a late comment compared to the post. If you are early it gets more upvotes. Looked up one of the past posts here where SSC talked about it: https://slatestarcodex.com/2014/05/28/dont-fear-the-filter/ and wtf, I probably knew this before and keep forgetting it, but Hanson came up with the great filter theory?
Hanson coined the term "Great Filter" in his paper [here](https://mason.gmu.edu/~rhanson/greatfilter.html) but when he talked about the possibility that the filter might just consist of multiple evolutionary "hard steps" in the past (the most likely answer to the paradox IMO), he was largely summarizing ideas that had been written up by the astrophysicist [Brandon Carter](https://en.wikipedia.org/wiki/Brandon_Carter), who also coined the term "anthropic principle"--Carter talks about the notion of "critical steps" in evolution in [this paper](http://geosci.uchicago.edu/~kite/doc/Carter_Phil_Trans_1983.pdf), on p. 149 (the number at the bottom of the page, not the upper left).

One point to keep in mind about a past Great Filter is that you could have a sequence of just moderately improbable steps needed to get from the initial formation of a planetary system to a planet with intelligent life, but if you multiply them all together that could result in an astronomically tiny probability. Candidates for improbable events could include both evolutionary steps like the origin of life and the origin of eukaryote-like cells and the origin of multicellular organisms, along with unlikely events in the initial formation of the planetary system, like all the gas giants being as far from the sun as they are in our system (which seems to be very unusual compared to [known exoplanet systems](https://astrobites.org/2015/03/26/jupiter-is-my-shepherd-that-i-shall-not-want/)), or an Earthlike planet with a large moon that can stabilize its axial tilt--more discussion of possibly unlikely aspects of our solar system, even before the origin of life, can be found in the book [*Rare Earth*](https://en.wikipedia.org/wiki/Rare_Earth_%28book%29).

If there are indeed multiple hard steps like this, it wouldn't take that many to get an astronomically small probability of intelligence arising. For instance, say you have 10 sequential steps that each have only a 1 in 100 chance of happening in any system that makes it through the prior steps before the planet becomes uninhabitable due to the expanding sun--together that would mean only 1 in 10^20 systems would develop intelligent life, a very tiny number given the Milky Way only has about 10^11 stars.

Hanson also gave a statistical argument that if there are a number of sequential hard steps in evolution, then if we imagine sampling a vast number of planetary systems and focusing on the small subset that happened to get through all of them in time before the sun died, we should expect that on average the steps would take approximately equal amounts of time to get through in the history of the "successful" worlds, even if the probabilities of different steps were quite different (for example, if one step has a 1/100 chance and another has a 1/100,000 chance). He summarizes this idea in the "Reconsidering biology" section of the paper where he makes an analogy with a lockpicker who only has a short finite amount of time to guess a series of numbers on a combination lock, but where there are many trials so we can consider the spacing of successful guesses on the rare trials where they got through them all and picked the lock, and it turns out the spacings are about equal even if the probabilities of guessing each number are very different (he gave a technical derivation in [this paper](http://hanson.gmu.edu/hardstep.pdf), and seems to say on p. 6 that this is a new result that goes beyond Brandon Carter's analysis).
He then points to some candidates for past hard steps that seem about equally spaced in time, suggesting he thinks it's plausible the Great Filter is a series of past hard steps, rather than a future civilization-killing Great Filter like rogue AI or grey goo. Paul Davies does a similar analysis in his book *The Eerie Silence*, in the section on the Great Filter--he suggests a number of possible hard steps that are spaced on average about 800 million years apart, his candidates being "first, the origin of life itself; second, the evolution of photosynthesis in bacteria 3.5 billion years ago; third, the emergence of 'eukaryotes' (large, complex cells with nuclei) about 2.5 billion years ago; fourth, sexual reproduction about 1.2 billion years ago; fifth, the explosion of large multicellular organisms 600 million years ago; and finally, the arrival of brainy hominids in the recent past".

Another interesting point is that the same lockpicker style argument suggests that if the lucky worlds have hard steps about 800 million years apart, the last one should happen about 800 million years before the end of the planet's habitable period for complex life, so on most planets that do evolve intelligent life it will have happened "just under the wire" (Brandon Carter seems to have noted this point in the paper of his I linked earlier, on p. 151 where he said that his mathematical model 'implies that with a relative probability close to unity the completion of the n critical steps within the allowed time range ... will occur near the end of this range to within a fraction of the order of magnitude of 1/n').

As it turns out, there is an [independent argument](http://www.spaceref.com/news/viewpr.html?pid=908) that barring intervention by a technological civilization, the Earth will probably become uninhabitable for multicellular life somewhere between 500 million and a billion years from now, because of a long-term drop in the levels of CO2 in the atmosphere as the sun gets slowly brighter and speeds up silicate rock weathering, which draws carbon molecules from the air.

In a rough way this could be seen as a successful "prediction" of the hard step model, since both Carter and Hanson were unaware of it and expressed puzzlement that the time span between plausible candidates for hard steps was much smaller than the time they supposed the Earth would remain habitable (see Carter's comments on p. 151-152 of the paper I linked, along with Hanson's comments in [this 2013 post](https://www.overcomingbias.com/2013/09/fewer-harder-steps.html)). Carter also has a 2008 paper [here](https://www.cambridge.org/core/journals/international-journal-of-astrobiology/article/abs/five-or-sixstep-scenario-for-evolution/841C9AC57BFBD5491756EB5951572B36) (paywalled, but if you are OK with using [sci-hub](https://www.science.org/content/article/frustrated-science-student-behind-sci-hub) it can be read [here](https://sci-hub.se/10.1017/S1473550408004023)) where he comments that the new estimate for the future habitable lifetime of the Earth leads him to favor a model with around 6 hard steps.
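If anyone wants to poke at the lockpicker claim numerically, here's a minimal Monte Carlo sketch (my own toy illustration, not code from Hanson's or Carter's papers; the window length, hazard rates, and trial counts are all made-up values). It samples exponential waiting times for three sequential "hard steps" with very different rates, keeps only the rare runs where every step finishes inside the habitable window, and checks that on those runs the mean step durations, and the leftover time before the window closes, all come out near the equal share T/(n+1):

```python
import numpy as np

# Toy check of the Carter/Hanson hard-steps spacing claim: conditional
# on all n steps finishing inside the habitable window T, each step's
# duration (and the leftover time after the last step) should average
# roughly T/(n+1), even though the rates below differ by a factor of 15.
rng = np.random.default_rng(42)

T = 1.0                               # habitable window, arbitrary units
rates = np.array([0.10, 0.02, 0.30])  # per-window hazard of each step (toy values)
n = len(rates)

kept = []
for _ in range(100):                  # 10 million trials, in memory-friendly chunks
    waits = rng.exponential(1.0 / rates, size=(100_000, n))
    ok = waits.sum(axis=1) < T        # keep only the "successful worlds"
    kept.append(waits[ok])
waits = np.vstack(kept)

print("successful worlds  :", len(waits))
print("mean step durations:", waits.mean(axis=0).round(3))
print("mean leftover time :", (T - waits.sum(axis=1)).mean().round(3))
print("equal-share value  :", round(T / (n + 1), 3))
```

With these numbers only about one trial in ten thousand succeeds, and on the successful runs the three mean durations and the leftover time should each land near 0.25 (up to small corrections, since the result is exact only in the limit of vanishingly hard steps). That is both the equal-spacing result and the "just under the wire" point: the last step completes, on average, one equal share before the window closes.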
Thanks for this effortpost; so he was just the marketeer. And you bring up a point which has been grating on me for a while: people love to say stuff like 'it is time for humanity to end and give the cockroaches a chance at building civilization', and that always struck me as wrong. I don't think there will be another chance for another species' civilization after humanity (but I cannot prove this of course; it just seems very hard to unburn fossil fuels, put helium back into the ground, etc.). I had never realized there is a hard cap on the habitable period for complex life, or well, I hadn't thought about the idea that this period might not be that long. (Even if I don't think 'each hard step is around 800 mil years long' is in any way valid; it seems like finding a pattern just because you are looking for one, a bit like the people who go 'empires last 250 years!' and use that to predict the end of the USA (or worse, use that as an argument to hasten the end/be horrible people).)
**[Fermi paradox](https://en.m.wikipedia.org/wiki/Fermi_paradox)**

> The Fermi paradox is the conflict between the lack of clear, obvious evidence for extraterrestrial life and various high estimates for their existence. As a 2015 article put it, "If life is so easy, someone from somewhere must have come calling by now". Italian-American physicist Enrico Fermi's name is associated with the paradox because of a casual conversation in the summer of 1950 with fellow physicists Edward Teller, Herbert York and Emil Konopinski. While walking to lunch, the men discussed recent UFO reports and the possibility of faster-than-light travel.

It’s not inconceivable that we’re among the first intelligent species in the galaxy, really. Evidence suggests that the galaxy had a fairly active core until recently that may have put a damper on the prospects of life.

This would be less us being unique and more us being the first roaches to move in after the place got fumigated.

From what I can tell LWers use this as further proof of the necessity of “Rationalism” since all the other civilizations got wiped out somehow, possibly through a nuclear war or something before a Godlike AI could evolve. Therefore we need “Rationalism” to help guide us through whatever narrow little bottleneck of correct choices can keep us alive long enough to create “Good AI” and cosmic transcendence and all that.

One of the few “right wing” things that seems to genuinely upset Scott Alexander is pro-war hawkishness, and this might have something to do with it. He penned a very strange insulting poem mocking John McCain after he died, and whenever he compliments Trump he praises him for not getting us into any more wars. Which is ludicrous: after the Trump admin killed Soleimani we got a very brief reboot of GW Bush era war hawkishness, then the Magaverse just kind of lost interest, but if the right defense contractor needs to do a bathroom remodel and gets in Trump or DeSantis’s ear at Mar-a-Lago, we’re absolutely doing more Iraq War-like invasions.

[deleted]

Too many unknown unknowns to know when AGI becomes a thing. Lots of work on neuromorphic computing right now; very interesting stuff.

You have rediscovered the Fermi Paradox. This happens surprisingly often on the internet these days.

I knew about the fermi paradox?
Yeah. And you just described it. If technological civilizations (in this case AI) exist and expand, where are they? That's the Fermi paradox. That's the central issue.
Yeah, I wanted to talk about the fermi paradox in relation to ai doommongering.