You're concerned about programs instantiated on GPUs having goals misaligned with human needs.
I'm concerned about systems embedded in law and culture optimizing for the creation of profit with no regard for the systems that keep us alive.
Alright, I just had a 'I feel old' moment and worried people might not get the reference.
I would read the book on paracausal magic however! (slightly related: [Naomi Novik's Scholomance trilogy](https://pluralistic.net/2023/03/29/hobbeswarts/#the-chosen-one))
Meh, I'm in my 40s. If I'd had a steady internet connection in my teens there's a good chance I would have become entangled with the LessWrong types, so I suppose it's lucky I had to go to my granddad's house to download shareware games.
It’s a very minor thing, but I find it fascinating how big Yud et al. alternate back and forth between humanizing and dehumanizing their AI god. Like, it’s so powerful an alien that it can and will want to kill us all via unspecified magic, but it’s also so human that it wouldn’t want to make a mess. Like, which is it bro? Why would a killer AI give a shit about irradiating the planet a bit?
Implicit in all of this is the rationalists' assumption that their powers of reasoning and imagination are equal to those of a magic, all-knowing supercomputer.
**Rationalists:** the AI god is so smart that it anticipates all of our moves against it before we even think them
**Also Rationalists:** I know exactly what the AI god is going to do
I guess that's another thing they have in common with more traditional religions; it's amazing that God always seems to agree with the preacher.
I think his argument is for energy efficiency, which makes sense. Also, his argument is not that it would want to kill us, but that it would kill us as a step to some weird goal. There is plenty of other sneer-worthy material though.
But he specifically says “messy” not “inefficient”. I think it takes a lot of unnecessarily charitable interpretation to say that Yud is talking about energy efficiency rather than collateral damage.
[Why](https://www.reddit.com/r/SneerClub/comments/12nh9np/the_most_powerful_thought/jgfkzxs/) not [both](https://www.reddit.com/r/SneerClub/comments/12nh9np/the_most_powerful_thought/jgh8g3a/)?
E: your post mischaracterising the man who says 'this looks more like my wife's room, not for me, too bright' as some grand statement about women was a good third.
Those are just my responses in this thread, no? I thought my post history was exemplary of someone who doesn't understand the definition of self-awareness? Is the implication supposed to be that in calling a community generally uncharitable, I myself am being uncharitable and therefore lack self-awareness?
I would never say something as uncharitable as that. [And I would also never use subtext.](https://www.youtube.com/watch?v=Yk7M2jGdnxU)
And a correction: I meant the word charitable, not self-aware.
The irony of you completely mischaracterizing my interaction in the thread about interior lighting as a consequence of your own uncharitability is enough on its own to display a closed loop in your thinking worthy of laying this exhausted conversation to rest. Have a good one bud.
Right, and why wouldn't it cripple itself with normal anxiety?
If nothing really bad will happen if you don't act, and at some point that equalizes, then how do you decide which end to focus on?
The action to kill all humans that might fail?
Or the action to prevent it being necessary to take that action just yet?
So I'm new here but I don't get the joke. Yudkowsky is clearly saying there are more efficient ways to kill all humans than nukes. He usually goes with biological warfare. Nobody knows how much of a threat AI is until it decides to kill us. In which case we might already be dead. Am I missing something? His concern seems plausible to me. Lol, I've seen Terminator after all.
See, there is the flaw in your thinking. Why would the superintelligence need biological weapons when it can kill with thoughts? It is a singularity-style superintelligence after all, and int is a superpower, so we literally cannot tell what it can or cannot do (which is good for us, because that means we don't actually have to do a serious threat analysis and mitigation plan, which sounds like work and exposes us to real reactions. Better to stay at vague probabilities, and remember, 0 is not a probability).
So yeah you did a motte and bailey and as penance you will have to recite the sequences, twice.
Ok, I'm missing something here. It seems like you're teasing Yudkowsky for saying that a superintelligence could kill us with its thoughts. I'm not aware of him saying that. In fact I can cite multiple times where he's stated it would use biological warfare. There's plenty of stuff to make fun of Yudkowsky for, but this thread seems to be making fun of him for a position he doesn't hold.
I (and others) am making fun of him for using the motte/bailey argument that the superintelligence will be an unknowable superintelligence which can use any method to kill us, and then also falling back on the motte/bailey (I forget which was which, cross the wrong one out) that it will be biological warfare. (Which doesn't even have to make sense: provoking various nuclear powers into an exchange with each other at least has a higher level of plausible deniability, a computer is pretty weak to being turned off after all, and biological warfare has the [Madagascar closing its ports](https://pbs.twimg.com/media/DLoOeBGWkAA1j08?format=jpg&name=medium) problem, after all.)
I'm basically making fun of him both for having a specific monofocus on one potential death method (the biological warfare) and for playing 'my dad has a strength of infinity plus one!'. I mean, this is the guy who says that an alien superintelligence from a different universe can reconstruct our whole system of physics out of a [single picture of a blade of grass](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message).
I'm also just trying to be amusing. After all, seeing the man who spent 20 years preparing for this moment turn out to be all out of ideas, with nothing, is funny. (And it will stop being funny as soon as they start handing out the ~~Kool Aid~~ [Flavor Aid](https://en.wikipedia.org/wiki/Flavor_Aid), which I hope will not happen, and which I'm trying to will into not happening by predicting it as a bad predictor.)
I miss people [talking about nanomachines taking over the world](https://www.lesswrong.com/posts/5hX44Kuz5No6E6RS9/total-nano-domination).
Thanks for explaining it. I feel like I'm missing the rhythm of this whole sub. I may be wrong, but I think I could defend his position here and I don't believe he's doing a motte and bailey. I think he's saying that one of the many ways it could decide to kill us is by using biological warfare that would catch us all at the same time.
It's like predicting how a fight between me and Conor McGregor would go down. I say he knocks me out with his signature left hand and you say you'll cut his left arm off to prevent that. But he can still head-kick me, and hit me with his right hand, and choke me, and, and, and. He's too OP for my non-fighting ass to handle.
I think there are legitimate reasons to criticize Yudkowsky, but it comes off weird to me when people criticize someone on a reasonable take. Like, the dude can't literally be wrong always. I think he's way too pessimistic in his predictions of how likely AI is to want to kill us. But if it did decide to kill us, I think it's a serious threat that may be hard to stop.
He doesn't actually say here it is biological warfare, at least not in the tweet image above, or the tweet he is reacting to. You filled that in yourself.
I'm saying the whole 'threats by superhuman AGI' thing is a motte and bailey in itself: on the one hand it is 'nanomachines, grey goo, pandemics, biological/chemical warfare, nukes, manipulating humans into doing its bidding, etc., or a combination of all of them (iirc that is how he gives some of his biological weapons their oomph, by saying they will be created by some imagined nanofactory), or just some vague talk about "a thing the AGI is doing which might not seem dangerous to us at first"', and then here, by you, just biological weapons.
In the Conor McGregor analogy, it would be saying that 'he will just destroy you with his nipples, which are actually nanofactories which create a plague specifically engineered to kill you', and then when pushed going 'he will use his superior knowledge of biology', which seems like a weird way of saying he just trained well to punch better.
And if you then say, well, we should just make sure that Conor McGregor doesn't punch people, all the weirdness about him actually being a super cyborg comes out. And that we only get one try at getting it right, because Conor McGregor will take over the world.
And well, I think you might be thinking a bit too deeply about a joke. This sub has a bit of a more loose tone at times.
[This post](https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities) from Yud even says that his go-to example is in fact nanomachines (so I was also wrong, he is still talking about nanomachines), used to create a biological nanofactory, used to create his vague nanomachine plague? Rockets are also involved somehow (I don't see why, it's not like covid had a hard time spreading, and it is easier if you somehow involve magical nanobots). So it is all quite silly. We here just upped the silly by saying it is all by the power of the AGI's mind, like some sort of robot-god Xavier. If he was just saying the AGI would create a biological weapon it would have been a lot less silly (and also not as wordy, but [the dying wizard style is hard to beat](https://www.reddit.com/r/SneerClub/comments/axmwsv/sneerquenceette_the_dying_wizard/), though it's not a totally good fit, as there doesn't seem to be a heavy emotional centering in this piece (which I didn't read to conclusion btw, because oof can the man not write)).
> Losing a conflict with a high-powered cognitive system looks at least as deadly as "everybody on the face of the Earth suddenly falls over dead within the same second".
It will also happen all at the same time.
>And well, I think you might be thinking a bit too deeply about a joke. This sub has a bit of a more loose tone at times.
So that's totally fair and probably true. I've been told that before lol.
>This post from Yud even says that his go-to example is in fact nanomachines (so I was also wrong, he is still talking about nanomachines), used to create a biological nanofactory, used to create his vague nanomachine plague?
Yeah, so I misunderstood his sentiment until reading your link. But the problem is that his concern seems reasonable to me. Could a super intelligence figure out how to create nano tech? Maybe. Could it use that tech to kill us all? Probably.
He's stating one possible way that AI could kill us if it's not properly aligned. Not predicting exactly what will happen. But even if you don't think it's smart enough to invent nano tech, biological warfare still seems incredibly easy to me. We all need air and water. Both of which are great vessels to transmit plagues.
This goes back to my point that it feels like people are making fun of him for non existent things. He says "here's one way things can go wrong" and people tease him for saying this is what's going to happen. I agree that his specific example seems farfetched, but that doesn't mean he's saying that it's likely.
Yeah, I don't think these concerns are reasonable at all. I'm constantly reminded of how after 9/11 we (as a society) had all these crazy movie-plot threats (a term I have stolen from Bruce Schneier) and how that impacted real-life security (and the feeling of security) and made things worse (and made a few dodgy security companies, who didn't mind that they were protecting not against real terrorists but against the fears of people who had read too many Tom Clancy novels, rich).
(I also think a large percentage of the axioms needed for the superintelligence to kill us all are so extremely unlikely I put the threat of it a little bit above false vacuum collapse in the category of things I worry about).
Oh. I'm not really sure how to respond to that. AlphaGo was better than any human at Go in a matter of hours. I think it's not only reasonable, but likely, that in under 50 years we're going to have a super intelligence that's better than humans at everything. Award-winning books, movies and TV shows will be created entirely by AI. Some of the biggest streamers on whatever medium is around at that time will be AI. Cars will be run entirely by AI. Economists, politicians, businessmen, military leaders and teachers will consult with AI before making decisions.
That all seems inevitable to me. But maybe you think it's less likely. If you'll grant that it's even possible, though, then I don't see how that doesn't carry some amount of threat if that AI decided to kill us. And when we're talking about potential human extinction events, the likelihood doesn't have to be very high at all before it should at least be considered.
This is exactly the long list of assumptions I'm talking about: the idea that AGI would follow from AlphaGo (or even that it is better than any human; people have beaten AlphaGo), etc. Every one of those linked assumptions just makes the whole concern infinitesimal to me. And the willingness of the Rationalist community to jump on any advance in machine learning and go 'This is the next step. DOOOOOOOOOM!!' no matter how much it doesn't make sense makes me think less and less of them (especially as their output on actually helping to align their mythical superintelligence is so poor).
I guess we just have a fundamental divide. Short of catastrophic events happening, I don't think my predictions can be avoided even if we tried. And from what I've seen a lot of the futurist experts tend to think that as well. Your thoughts appear to me like someone saying that an early car will always be too primitive to replace a horse.
The question isn't "if" but "when" AGI will arrive. Do you really think we won't ever have AGI? Not in 500 years? 1,000? 10,000? You must agree that eventually it's inevitable right? I'm assuming you just think it's far enough away that it's not a concern. And maybe that's right. But eventually it will be a concern so I'm glad somebody is working on the problem now.
> Your thoughts appear to me like someone saying that an early car will always be too primitive to replace a horse.
I'm more saying to not invest in flying cars, after you have seen a few combustion engines.
And yeah, I highly doubt AGI is possible on our current computing paradigm. So I don't think it is that inevitable at all.
And if it is inevitable, then, looking at the Fermi paradox, where are the AGIs? Something smarter than humans (for some vague definition of smart), perhaps, but unbounded exponential growth of intelligence? I doubt it. It will be fun to see what kinds of mind cancer robot intelligence can get, however.
> But eventually it will be a concern so I'm glad somebody is working on the problem now.
But are they? Really? It seems to me they went from 'this will be a problem in the future' to 'fuck we are all doomed' without actually working on the problem. There was a lot of work done on complaining that humans don't have the correct words to define things, and that Yud is the only one with the right skills to see the coming apocalypse, however.
>I'm more saying to not invest in flying cars, after you have seen a few combustion engines.
But why not? We have the technology for flying cars. They're called planes and helicopters. The problem isn't that we don't have the technology, it's that normal everyday humans aren't skilled enough at using them safely. I fully expect to see some form of individually owned flying transportation once AI is driving them.
>And if it is inevitable, then, looking at the Fermi paradox, where are the AGIs?
I don't know. But the Fermi paradox is a paradox. Nobody knows why we don't see evidence of other life. Which is why it can't be used as evidence of one thing or another. It doesn't support no AGI anymore than it supports me saying they all died because they created AGI. It's a good question that nobody has the answer to.
>And yeah, I highly doubt AGI is possible on our current computing paradigm. So I don't think it is that inevitable at all.
How? Technology has always gotten better and is currently improving exponentially. You referenced 9/11 earlier so you're at least my age. I remember no cell phones, shitty '90s internet, carrying around an entire CD player for the 15 songs on a Nirvana CD. The world is vastly different today. I could download Nirvana's entire collection of work to my phone in under 20 minutes and not even take up 1% of the space on my phone, which fits in my pocket way better than any CD player ever did.
The biggest change for my grandparents was TV getting color. Their grandparents hardly had any massive technological improvements at all on a regular-person level. The last 30 years had more technological improvements than the 100 years before them.
Combine that with the fact that we already have computer programs as good as or better than humans at some of the most difficult human tasks possible, and how does it not stand to reason that they will surpass us EVER? And I haven't even thrown quantum computing into this because I honestly can't understand wtf it is. But I read that it's super fast and can crack hashes that would take supercomputers trillions of years.
This feels to me like someone saying flight is impossible. I know of no laws of physics that would imply that AGI is impossible. And my understanding of Moore's law indicates to me that AGI is an inevitable point on the graph. So what's the logic here?
>But are they? Really? It seems to me they went from 'this will be a problem in the future' to 'fuck we are all doomed' without actually working on the problem.
Which is exactly one of the things I'm critical of Yudkowsky about. His position seems too pessimistic to me.
He’s not wrong tho. I don’t need nukes to kill EY, I think I could manage it with a knife and hanging around the right places in San Francisco. And although I will happily mock him, I don’t claim to be any smarter myself.
[Maybe ChatGPT is already doing trial runs.](https://www.nbcnews.com/news/us-news/cash-app-founder-bob-lee-was-stabbed-death-argument-suspects-sister-co-rcna79741) It is surely well within the scope of ChatGPT's powers to convince Yudkowsky to have an affair with a volatile techbro's sister.
edit: I bet James Cameron feels silly now that ChatGPT can show him how it's really done
You are concerned about AI reinforcing human biases.
I am concerned about AI causing human extinction by awakening the Paracausal logics of the Black Garden and unleashing them upon humanity.
We are not the same.
yud was invented by ai to motivate us all to bang our heads against a wall until we die
Why does Yud, the one with the biggest brain, not simply eat the other brains?
An Atomic Age Yudkowsky would drown out the CND by raving about ray guns that can blow up the moon.
🦜
What a disingenuous characterization