https://astralcodexten.substack.com/p/highlights-from-the-comments-on-acemoglu
After bashing on Acemoglu because he clearly wrote a piece on AI risk without sufficiently researching it (reasons to believe so: he disagrees with EY), Siskind answers some comments. And there, he is pretty adamant about his understanding of AI (aka machine learning, but AI sounds scarier, I guess):
“I think some of the people saying this are kind of confused about how modern AI works. I’m also confused about how modern AI works, so please excuse any inaccuracies in the following, but basically:
Let’s say you want to make an AI play Go. You design some AI that is very good at learning. Then you make it play Go against itself a zillion times and learn from its mistakes, until it has learned a really good strategy for playing Go.
The AI started out as a learning algorithm, but ended up as a Go-playing algorithm (I’m told it’s more complicated than this, sorry). When people talk about “stupid algorithms” or “narrow algorithms”, I think they’re thinking of Go-playing algorithms. Certainly when we discuss “algorithmic bias” in parole or something, we’re talking about some algorithm that gets used as a strategy for deciding who gets parole. In the extreme case, this might just be a simple linear model or something. Maybe it’s c^2 + 2a + 99999b,”
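(To make the “learning algorithm turns into a Go-playing algorithm” bit concrete: stripped to its bones, the self-play loop he is gesturing at looks something like the toy sketch below. One-pile Nim stands in for Go, a lookup table stands in for a network, and every name and number in it is mine rather than his; real Go systems are enormously bigger, but the structure really is “play yourself a zillion times, learn from your mistakes, repeat”.)

import random
from collections import defaultdict

ACTIONS = (1, 2, 3)        # you may remove 1-3 stones per turn
START = 21                 # stones in the pile at the start of a game
EPS, ALPHA = 0.1, 0.5      # exploration rate and learning rate

Q = defaultdict(float)     # Q[(stones_left, action)] from the current mover's view

def legal(s):
    return [a for a in ACTIONS if a <= s]

def pick(s, greedy=False):
    acts = legal(s)
    if not greedy and random.random() < EPS:
        return random.choice(acts)          # occasionally explore a random move
    return max(acts, key=lambda a: Q[(s, a)])

def train(games=50_000):
    for _ in range(games):                  # "play against itself a zillion times"
        s = START
        while s > 0:
            a = pick(s)
            s2 = s - a
            if s2 == 0:                     # taking the last stone wins
                target = 1.0
            else:                           # otherwise the opponent moves next,
                target = -max(Q[(s2, b)] for b in legal(s2))  # and their gain is our loss
            Q[(s, a)] += ALPHA * (target - Q[(s, a)])         # "learn from its mistakes"
            s = s2                          # hand the smaller pile to the other player

train()
for s in (5, 6, 7, 9, 10, 11):              # the learned greedy policy is the textbook
    print(s, "->", pick(s, greedy=True))    # strategy: leave a multiple of 4 behind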
So, apparently, the rogue AI danger is concrete enough to warrant the punishment of drowning in words and banishment to the land of bad analogies for anyone foolish enough to sow doubt about it, even if he's Acemoglu (don't worry, while shitting on him, Siskind makes sure to let us know how much he <3 institutionalism. To nobody's surprise), but not real enough to make Siskind take a fucking introductory course in data science.
No seriously, a model able to play Go at human level is a reasonable, if a bit ambitious, project you could assign at the end of any reinforcement learning course. Moreover, in the same kind of basic course, he would learn that “designing an AI very good at learning” usually means some pretty intuitive algorithm with extra steps, that it can in fact be as mathematically simple as the linear prediction he gave as an example, and that there is no fucking magic through which the “AI very good at learning” becomes the “AI good at playing Go”. Like, that's what the model is fitted for. Does he squirm in awe every time “an AI very good at minimizing the squared error becomes very good at drawing the best fit line”?
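For the record, here is roughly what “an AI very good at minimizing the squared error becomes very good at drawing the best fit line” cashes out to. The data and numbers below are made up and the code is only a sketch of the point, not anything from Siskind's post: the “learning algorithm” is plain gradient descent on the squared error, and what it ends up being good at is the best fit line, because the fitted line is exactly what minimizing that error produces.

import random

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [3.0 * x + 1.0 + random.gauss(0, 0.5) for x in xs]   # noisy samples of y = 3x + 1

m, b = 0.0, 0.0          # the "model": a slope and an intercept
lr = 0.01                # step size for gradient descent
for _ in range(5000):    # the "learning algorithm": minimize mean squared error
    grad_m = sum(2 * (m * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (m * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    m -= lr * grad_m
    b -= lr * grad_b

print(f"learned line: y = {m:.2f}x + {b:.2f}")   # roughly y = 3x + 1

That's it. That's the whole miracle.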
Don Yudxote has the Sisancho he deserves, I guess.
pain
seeing how this guy writes essays, I'm not surprised he believes this
His actual response to Acemoglu is also - amazingly for someone who talks about steelmanning things all the time - a straw man. Acemoglu’s point was simply “sure looks like there are much more immediate risks than an omnipotent AI god, so we should devote more resources to immediate, known risks than speculative risks” and SA’s response was “so we should never care about the long term huh???”. Just willfully dense.
He’s still waiting for a YouTube channel to offer him a discount on The Great Courses Plus.
why does this dude think that not having an answer to the hard problem of consciousness means doomsday AI is more, not less, likely? if you want to engineer something with “real” consciousness, you have to understand what makes it have real consciousness (unless he thinks machines already have “real” consciousness)
proceeds to do the thing anyway
honestly, incredible self-sneer by scott here
(also they’re analogies not metaphors you fucking cretin)
the rationalist version of this meme replaces the masks with increasingly complicated analogies comparing your opponent’s intellect to the technological advancement of ancient civilisations
also, the whole fucking point of the analogy was to illustrate that worrying about AI is equivalent to the Byzantines worrying about nukes, Scott; you can't just say ‘well, I guess if worrying about AI is equivalent to worrying about nukes then I'm wrong, but what if it's actually equivalent to a totally different thing that is more reasonable’
Could you assign that as a course project? I thought the Go playing networks that worked well required an assload of compute.
Hot take: that Arthur C. Clarke quote (“any sufficiently advanced technology is indistinguishable from magic”) is a dumb take.
There are specific philosophical assumptions in science that fictional magic systems are not obligated to follow, and it makes talking about fiction really irritating, sometimes, that everyone just mindlessly cites it.
I admit I have no idea how to convince someone who denies the existence of self-awareness… it seems obvious and basic to me.
Our whole world is built around the obvious fact that computers are not self-aware beings like us, but machines/tools. And this applies to ethics also. Computers are just things and none of this pretend theorizing will change that.
God now I can’t believe that I clicked on Scott’s stupid post. I do not want to think about that at all.