r/SneerClub archives
"Any sufficiently advanced technology is indistinguishable from magic. Especially if you don't even try to understand it. No seriously dude, why are you even talking about that?" Arthur C. Clarke if he had seen rationalists, probably (https://www.reddit.com/r/SneerClub/comments/p300he/any_sufficiently_advanced_technology_is/)

https://astralcodexten.substack.com/p/highlights-from-the-comments-on-acemoglu

After bashing Acemoglu because he clearly wrote a piece on AI risk without sufficiently researching it (the evidence: he disagrees with EY), Siskind answers some comments. And there, he is pretty adamant about his understanding of AI (aka machine learning, but I guess AI sounds scarier):

“I think some of the people saying this are kind of confused about how modern AI works. I’m also confused about how modern AI works, so please excuse any inaccuracies in the following, but basically:

Let’s say you want to make an AI play Go. You design some AI that is very good at learning. Then you make it play Go against itself a zillion times and learn from its mistakes, until it has learned a really good strategy for playing Go.

The AI started out as a learning algorithm, but ended up as a Go-playing algorithm (I’m told it’s more complicated than this, sorry). When people talk about “stupid algorithms” or “narrow algorithms”, I think they’re thinking of Go-playing algorithms. Certainly when we discuss “algorithmic bias” in parole or something, we’re talking about some algorithm that gets used as a strategy for deciding who gets parole. In the extreme case, this might just be a simple linear model or something. Maybe it’s c^2 + 2a + 99999b,”

So, apparently, the rogue AI danger is concrete enough to warrant the punishment of drowning in words and banishment into the land of bad analogies for anyone foolish enough to sow doubt about it, even if he’s Acemoglu (don’t worry, while shitting on him, Siskind makes sure to let us know how much he <3 institutionalism, to nobody’s surprise), but not real enough to make Siskind take a fucking introductory course in data science.

No seriously, a model able to play Go at human level is a reasonable, if a bit ambitious, project you could assign at the end of any reinforcement learning course. Moreover, in the same kind of basic course, he would learn that “designing an AI very good at learning” usually means some pretty intuitive algorithm with extra steps, that it can in fact be as mathematically simple as the linear prediction he exemplified, and that there is no fucking magic through which the “AI very good at learning” becomes the “AI good at playing Go”. Like, that’s what the model is fitted for. Does he squirm in awe any time “an AI very good at minimizing the squared error becomes very good at drawing the best-fit line”?
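For scale, here is that entire “miracle” as a minimal runnable sketch (Python with NumPy; the data is made up and the numbers are arbitrary). The “learning algorithm” is one least-squares call, and the “line-drawing algorithm” it becomes is just the two fitted coefficients:

```python
import numpy as np

# Minimal sketch: the "AI very good at learning" is a least-squares solve,
# and the "line-drawing algorithm" it becomes is just two fitted numbers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 3.0 * x + 2.0 + rng.normal(scale=1.0, size=100)  # made-up noisy data

A = np.stack([x, np.ones_like(x)], axis=1)           # design matrix [x, 1]
slope, intercept = np.linalg.lstsq(A, y, rcond=None)[0]
print(f"best-fit line: y = {slope:.2f}x + {intercept:.2f}")
```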

Don Yudxote has the Sisancho he deserves, I guess.

These arguments usually end with me accusing these people of being p-zombies, which tends to make them angry (or at least cause them to manifest facial expressions and behaviors associated with anger).

pain

The worst thing is he thinks that parenthetical remark is a thermonuclear line

Siskind spends so much of his time pretending to think critically when everything just gets lined up with his starting assumptions. Of course he'd imagine conversations where he owns people.

Non-self-aware computers can beat humans at Chess, Go, and Starcraft. They can write decent essays and paint good art.

seeing how this guy writes essays im not surprised he believes this

His actual response to Acemoglu is also - amazingly for someone who talks about steelmanning things all the time - a straw man. Acemoglu’s point was simply “sure looks like there are much more immediate risks than an omnipotent AI god, so we should devote more resources to immediate, known risks than speculative risks” and SA’s response was “so we should never care about the long term huh???”. Just willfully dense.

It was worse than that. He contrived an entire argument that wasn’t even being made… that somehow Acemoglu was saying the future risks can’t ever happen because AI already poses a threat now. Very strange reading of the article. I wonder if Siskind can read good lol

but not real enough to make Siskind take a fucking introductory course in data science.

He’s still waiting for a YouTube channel to offer him a discount on The Great Courses Plus.

why does this dude think that not having an answer to the hard problem of consciousness means doomsday ai is more, not less, likely? if you want to engineer something with “real” consciousness you have to understand what makes it have real consciousness (unless he thinks machines already have “real” consciousness)

probably yes, because that means somebody could accidentally create a self-conscious or self-improving AGI ([Peter Watts has shown](https://en.wikipedia.org/wiki/Blindsight_(Watts_novel)) that self-consciousness and intelligence don't have to be linked), and without giving the AGI human-friendly goals it could doom us all!
"has shown" might be a bit strong - perhaps "has speculated"
Taken from the Rationalist perspective, so SF is real! (E: and yeah, obv the evidence is mega weak, but I learned of Watts via r/ssc, and iirc people have used his work in arguments there, so that is why I mentioned it).
I know it was relevant to your comment, but I always think it's unfair that Watts just gets cited for Blindsight when he has [so many other neat speculative ideas](https://www.rifters.com/crawl/?p=6315).
Did he really find or photoshop a picture of a yogurt inside a body? That's too funny
I have no idea. Perhaps he found it on the internet, which would be even weirder.
Yeah Watts has a lot of neat ideas. His underwater series was also pretty weird. (sorry forgot the name). Not sure how Rationalist adjacent Watts is, so don't take me naming him here as me sneering at him.
Rifters! Starfish & sequels. Shit's fucked up. I do love his Sunflowers series.
Tournesol is the French name for sunflower; the literal translation is ‘turned sun’, in line with the plants’ ability for solar tracking, which sounds fitting. The Spanish word is girasol.
thank u
You really don't if it is an emergent property of sufficiently complex problem solving systems. Which it is. Otherwise how did we get it?

I have no right to complain here because I am kind of Patient Zero for overly-labored-metaphor use.

proceeds to do the thing anyway

honestly, incredible self-sneer by scott here

(also they’re analogies not metaphors you fucking cretin)

Maybe the real lesson here is that we should only worry about the most medium-term of risks? That way we can accuse the people worrying about nearer-term risks than we are of being like Byzantines worried about fireworks, and the people worrying about longer-term risks than we are of being like Byzantines worrying about nukes - whereas we ourselves are clearly most like Byzantines worried about Ottoman cannons.

the rationalist version of this meme replaces the masks with increasingly complicated analogies comparing your opponent’s intellect to the technological advancement of ancient civilisations

also, the whole fucking point of the analogy was to illustrate that AI is equivalent to the Byzantines worrying about nukes, Scott, you can’t just say ‘well, I guess if worrying about AI is equivalent to worrying about nukes then I’m wrong, but what if it’s actually equivalent to a totally different thing that is more reasonable’

Could you assign that as a course project? I thought the Go playing networks that worked well required an assload of compute.

See e.g. [Deep Learning and the Game of Go](https://www.manning.com/books/deep-learning-and-the-game-of-go):

> Deep Learning and the Game of Go introduces deep learning by teaching you to build a Go-winning bot.

According to one of the Amazon reviews, you "end up with a bot that you can perceive to play intelligently (though not at the AlphaGo level.)" Much of the idea that Go is intractable dates back to before recent ML techniques like deep reinforcement learning; the so-called "deep learning revolution" only began around 2012.
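And it doesn't need to be AlphaGo-scale to make the conceptual point. As a minimal sketch (tabular Monte-Carlo self-play on tic-tac-toe, arbitrary hyperparameters, obviously a far cry from the deep networks the book covers), a complete self-play learner fits in about forty lines: the "AI very good at learning" is the one-line update at the end of each game, and the "game-playing algorithm" it leaves behind is just the lookup table:

```python
import random
from collections import defaultdict

# Toy illustration (tic-tac-toe, not Go): a complete self-play learner using
# tabular Monte-Carlo value updates. The "learning algorithm" is the update
# at the end of play_one_game; what it produces is just the table Q.
Q = defaultdict(float)   # (board_string, move_index) -> value estimate
ALPHA, EPSILON = 0.5, 0.1
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == "."]

def pick_move(board):
    legal = legal_moves(board)
    if random.random() < EPSILON:
        return random.choice(legal)                 # explore
    return max(legal, key=lambda m: Q[(board, m)])  # exploit

def play_one_game():
    board, player, history = "." * 9, "X", []
    while True:
        move = pick_move(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        w = winner(board)
        if w or not legal_moves(board):
            # credit every move in the game with the final outcome
            for state, m, p in history:
                reward = 0.0 if w is None else (1.0 if p == w else -1.0)
                Q[(state, m)] += ALPHA * (reward - Q[(state, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):
    play_one_game()
# inspect learned values for X's opening moves (centre/corners should rate well)
print({m: round(Q[("." * 9, m)], 2) for m in range(9)})
```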

Hot take, that Arthur C. Clarke quote is a dumb take.

There are specific philosophical assumptions in science that fictional magic systems are not obligated to follow, and it makes talking about fiction really irritating, sometimes, that everyone just mindlessly cites it.

It can be annoying, but I think the fundamental premise is correct. Look at something like phlogiston or mesmerism: as Hasok Chang has pointed out about the former and my friend Nathan Oseroff-Spicer has pointed out about the latter, both concepts had, within limited circumstances, “surprisingly” practical and effective explanatory power. The undergirding point here is that in fact *in science* the *rhetoric* that science has specific (generally materialist) assumptions simply doesn’t track with the history or actual practice of science, even if many though not all practitioners of science believe that to be the case.
I know that things like falsificationism and the like have been rejected, but it's my impression that, especially in modern science, there's a nearly universal agreement that the natural laws of science are coherent. Because quantum gravity and relativity contradict each other, scientists expect to find a theory of quantum gravity that can explain both observations without a contradiction.
I actually wrote about this, pertaining to so-called “natural laws”, in my MSc. My position is that scientists have a reasonable expectation of some final theory within physics. On the other hand, the relegation of the so-called “special sciences” to a lower status than physics is a mistake, and foundational grounds in physics for other sciences are not only boring but trite and ignorant.
I would agree with this, but even outside physics, while there may not be a "final theory" that explains everything, there's still the assumption that our observations won't contradict each other--if they do, it's actually because of hidden factors we aren't considering, which, if fully understood, would make our observations no longer contradictory. There's an assumption that the universe makes sense and is rational, which I think makes sense if you look at our universe... but I don't think it does if you look at something like the SCP Foundation. There, nothing makes sense, the rules are more like polite suggestions, and anomalies operate according to a bunch of rulesets that are not only mutually exclusive with what we're familiar with, but mutually exclusive with each other. I think it represents a major unexamined assumption, and a significant failure of imagination, that people just *assume* that everything in those settings is ultimately explainable and capable of being integrated into the same set of physical laws we're all familiar with.
Also, would you be willing to link to where you wrote about it, if that's available online? It sounds like an interesting read.
I’ve had plans for several years to revise it and put it up online, but I also have other things going on
In the meantime you can read Jerry Fodor’s excellent commentaries on the “special sciences” as an intro to this position

I honestly have no idea what self-awareness is or what it even potentially could be.

I admit I have no idea how to convince someone who denies the existence of self-awareness…it seems obvious to me and basic.

But also, who cares?

Our whole world is built around the obvious fact that computers are not self-aware beings like us, but machines/tools. And this applies to ethics also. Computers are just things and none of this pretend theorizing will change that.

God now I can’t believe that I clicked on Scott’s stupid post. I do not want to think about that at all.