Maybe by the time the world-killer arrives, we’ll have a lot of intermediate AIs sort of on our side that are only a little less intelligent than the world-killer, and the world-killer won’t have an overwhelming advantage against us. For example, maybe in 2050, some AIs will warn us that they can see a promising route to turn virus XYZ into a superpathogen, we will have millions of AIs work on XYZ vaccines, and then the first AI smart enough and malevolent enough to actually use the superpathogen will find that avenue closed to them.
https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer
Looking forward to his seminal work on the topic, Evangelion: You Will (Not) Adjust Priors
This goes directly against the Yuddite dogma of smart(N+1) beats smart(N).
Heretik!
This stuff reads more and more like science fiction to me. It’s just so… unserious and unrigorous. It’s great fodder for scaring the shit out of 22-year-olds and converting them to your cause based on really dodgy math. The math of infinitesimals is kinda fucked: when you take lim n → 0 of 1/n you end up with infinity. So it doesn’t matter how small the probabilities are; when you weigh them against infinite risk you end up with ‘life is gonna end’ – which is always true. Everyone is gonna die.
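To spell out the arithmetic I’m gesturing at (a rough sketch in my own notation, with p standing for the probability of doom and H for the assumed harm, neither of which appears in the original argument):

\[
  \lim_{n \to 0^{+}} \frac{1}{n} = \infty,
  \qquad
  \mathbb{E}[\text{harm}] = p \cdot H \to \infty \ \text{as } H \to \infty \ \text{for any fixed } p > 0,
\]

so as long as the stakes are treated as unbounded (‘everyone dies’), no probability is small enough to make the product negligible, and the conclusion drops out of the setup every time.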
I do believe that rationalism and accelerationism and e/acc and the like are religion for nerds who don’t want to believe in god.
But what if one of the good AIs working on the vaccines is actually the world-killer in disguise? Of course, enough good AIs will expect this, so there will be lots of suspicion thrown around, until the good AIs fight it out and destroy each other.
…which was the REAL world-killer’s plan all along; it was simply waiting for this to happen before destroying us all, unopposed.
Goddamnit guys, you’re not supposed to literally reinvent paganism. Fuck it, it’s a slow start today:
If I wanted to be charitable, I would read this as ‘most important debates on AGI to have’ but I’m here, so I’m not interested in being charitable.
As written this makes it look like everyone except Eliezer is <=50%, which isn’t true; I’m just having trouble thinking of other doomers who are both famous enough that you would have heard of them, and have publicly given a specific number.
Sweet baby Jesus.
So there’s no definition of what the bad outcome is here, it’s just “we die cause AGI” which is something of a red flag. I have a feeling this is going to be relevant later.
Not that long, apparently. It’s worth pointing out that “superintelligence” remains pretty unexamined for being so central to the “why we all die” argument; any definition of intelligence that maximizes paperclips doesn’t seem particularly super to me.
Gotta get our Einstein reference in. I know I sound like a broken record on this, but again, “smart” is not really well defined. The idea that there is only one thing that constitutes intelligence seems like a baseline assumption of these fellahs.
…
Let me rephrase: will there be sufficient angels on the head of this pin to establish dystopia?
JESUS WHY IS THIS SO FUCKING LONG
I am sure this is a completely neutral example that is not indicative of any ideological bent in the writer. I’m equally sure it will go entirely smoothly and that this is a good example that will not lead to any absurdity.
It is incredible to me that at every point, these guys display such a profound lack of imagination about the way these entities are structured and behave. They have to square the circle of using human examples of cognition and behavior to describe things that are a priori inhuman in environment, senses, and structure, while saying that it will be a superintelligence that will just do what we do but MOAR.
I want to point out that the Logical Positivists, who (Yud doesn’t seem to understand) had vastly more intellectual firepower than MIRI, had zero luck developing a universal grammar and syntax for provable statements only. Also, I’m not an expert in the proof, but it seems like Gödel would rule out, in principle, a system that is both complete enough to describe every possible interaction between agents and consistent enough to be provable in the sense Yud is making.
God damn it guys, you’re literally using a plot from a GI Joe movie.
I’m snipping the really good reasons why you shouldn’t worry about this because:
…
I swear to fucking god these people are profoundly contemptible.
Then the vaccine is what gets us! * Taps head *
Other take: how long before the narrative is some version of The Brave Little Toaster?
Sidebar: this is the plot of my personal sci-fi universe I’ve been scratching away at for years. A pantheon of AIs develops, there is a “shadow war in heaven” with a benevolent caretaker type winning and then keeping a lid on the emergence of any new superhuman AIs. It’s my cheat code for having stories set in the medium future without having to deal with a singularity or powerful AI characters.
Which I guess would be a sort of impossible utopia to the Yuddites. Come to think of it, I really should include them as a messianic cult in there somewhere.
amazing
That’s not a bug, that’s a feature.
Also apparently monomaniacal computer genies/paperclip maximizers will be rebranding as ‘supercoherent AI’ in the near future.
What if instead one such smaller but nonetheless sentient AI started to manipulate events on a global scale to force a burnt-out but experienced hacker to help it unite with another such small AI, not to become a world-killer but rather to dissolve into the matr… the internet?
Yes, I too have seen Person of Interest.
I’m curious which side biological viruses would take: being weaponized by the “god” AI to overwhelm us, a means to short-term success but possible destruction of their chain of dependence on biological matter, or being ruthlessly hunted down by the “lesser” AIs in defense of our own existence, at the risk of accidentally being too thoroughly eradicated should those lesser AIs succeed at protecting us.
God, I’d hate to live with the existential crisis viruses are experiencing right now. /s