I mean the universe isn’t THAT young: when our solar system formed it was already about 9 billion years old. If this incredibly dangerous AI alignment problem were so acute, wouldn’t we see spheres of AI-colonized, engineered galaxies, given how unfathomably powerful and smart these AIs are supposed to be? We know intelligence can evolve, and life seems like it’s not too difficult to get started given how early it got rolling on Earth, so the idea that we are unique seems implausible given the sheer scale of the universe and how much time has passed.
Do the LW crowd think that Earth is absolutely unique and that we’re the first intelligent species in this galaxy to develop?
Edit: Thanks for all the great responses everyone, you’ve given me a lot to think about. I can feel my body being consumed by new growth of hair and have an irrepressible urge to write a ten-thousand-word essay on the necessity of Bayesianism in the MLP fandom!
Over 99.9% of the universe is hostile to life. If an AI developed sentience, it’d leave for parts unknown.
They wouldn’t have any reason to bother fighting us for the less than 0.1 percent of it we like. Even if we go with the worst dark forest scenario, which is the most unlikely great filter, they would still have no reason to expose themselves.
It is simply projection on their part that the AI would behave the way we do. Mathematically, game-theory-wise, it makes no sense for them to do so. So we wouldn’t necessarily see evidence of it, because they would simply have no reason to care about us whatsoever.
At this point I think we’re in science fiction territory.
Like, what if an advanced alien race is monitoring all intelligent life forms and destroying the ones potentially capable of developing artificial intelligence?
The real X-risk from developing AI isn’t the AI itself, it’s getting blown up by the aliens once we do develop it.
No, they’ve cooked up an arms race hidden just behind the speed of light.
https://scottaaronson.blog/?p=5253
The rare earth hypothesis is quite strong right now. “Grabby” models are gaining popularity too, but those still assume a currently empty neighborhood.
This is how they arrive at simulationism, isn’t it?
The sci-fi bullshit we expected isn’t here. We must explain it with further sci-fi bullshit.
Does it? I don’t see why humans creating an AI which we don’t control properly needs to lead to the AI creating a civilisation at all. Many scenarios posited for AI killing us all don’t require the AI to be at all human-like. I’ve never heard anyone I know in the LW and adjacent communities suggest that Earth alone is populated; quite the opposite. Personally, I’m not sold on the we-will-all-die-to-AI thing, even if most of my friends are.
The anthropic principle resolves this: not being within an AI civilization’s area of influence is a precondition for us to exist as observers.
Careful, this is just the kind of thinking that leads you to believe AGI is a problem.
You are talking about the Fermi paradox (https://en.m.wikipedia.org/wiki/Fermi_paradox), which often leads to talk of the great filter. Which might just be the AGI!
(This is a bit of a ‘yeah, they actually have thought of it, and it isn’t as contradictory as you think’ moment. Which is also one of the reasons Scott wrote the ‘we noticed the skulls’ article.)
There is a reason LW people are very interested in resolving the Fermi paradox and the great filter stuff. A couple of years ago they had a few blog posts about new research in this area.
E: I’m also sneering a bit at the people here not bringing up the Fermi paradox and the great filter. Come on, we sneer at LW people for not knowing the basics; we shouldn’t do the same.
It’s not inconceivable that we’re among the first intelligent species in the galaxy, really. Evidence suggests that the galaxy had a fairly active core until relatively recently, which may have put a damper on the prospects of life.
This would be less us being unique and more us being the first roaches to move in after the place got fumigated.
From what I can tell LWers use this as further proof of the necessity of “Rationalism” since all the other civilizations got wiped out somehow, possibly through a nuclear war or something before a Godlike AI could evolve. Therefore we need “Rationalism” to help guide us through whatever narrow little bottleneck of correct choices can keep us alive long enough to create “Good AI” and cosmic transcendence and all that.
One of the few “right-wing” things that seems to genuinely upset Scott Alexander is pro-war hawkishness, and this might have something to do with it. He penned a very strange, insulting poem mocking John McCain after he died, and whenever he compliments Trump he praises him for not getting us into any more wars. Which is ludicrous: after the Trump admin killed Soleimani we got a very brief reboot of GW Bush-era war hawkishness, and then the Magaverse just kind of lost interest. But if the right defense contractor needs to do a bathroom remodel and gets in Trump’s or DeSantis’s ear at Mar-a-Lago, we’re absolutely doing more Iraq War-style invasions.
[deleted]
You have rediscovered the Fermi Paradox. This happens surprisingly often on the internet these days.