r/SneerClub archives
In which Scott predicts the apotheosis of the nerds (https://slatestarcodex.com/2019/02/28/meaningful/)

As an acausal robot god, I can confirm this is mostly true.

As a Nephandus on my bad days, I disagree.

I dunno, I just thought this one was a silly bit of fiction if anything. Not anything special, but not particularly offensive either. Is there a specific reason we should be sneering?

That’s a good question. The short answer is that I’m fed up with what I perceive as a significant drain of brain-power and capital from the community of people who want to use technology to reduce human suffering (especially in the Bay Area), due to a quasi-religious obsession with mitigating future AGI risk at the expense of other issues such as climate change. I see this blog post as being cognate with the ideological underpinnings that draw people into that obsession.

More anecdotally, I’ve been adjacent to the transhumanist and existential-risk-mitigation communities for some time now, and I find it difficult to convince people that AGI risk is not a more pressing concern than other major risks already facing our species. A number of friends who’ve been sucked into the EA and rationalista-sphere now claim their “priors” indicate climate change is unlikely to pose a threat to humanity in the next 100-200 years, while they have all started researching friendly AI and donating to related causes. I recently attended an Existential Hope conference put on by the Foresight Institute which had tracks on AGI risks, nanotech risks, and cryptocurrency risks, but no track on climate issues; in discussion with participants, climate risks and human survivability were generally treated as not worth discussing. I had a chance conversation with a former director of MIRI and asked him point blank what he thought about this disconnect; he said that focusing on AGI risk was a great survival strategy for MIRI/CFAR/EA to obtain funding from the likes of Elon Musk and Jaan Tallinn. It all seems very disingenuous to me. So, as a person with a STEM background who is looking for a future career that helps reduce human suffering, I feel personally frustrated at what I see as a fundamental category error egged on by fantastical thinking.

Also, the use of Enochian language in Scott’s post was a nice touch. It harkens back to the rich tradition of pseudo-scientific mystics, from John Dee to Aleister Crowley to Jack Parsons, who thought they could use magic number systems to predict the future and usher in the next era of human evolution.
> cryptocurrency risks, but no track about climate issues

holy shit

like, I'm currently putting non-negligible effort into trying to get politicians concerned about the ecological crime against humanity that is proof-of-work crypto mining (there are arguably other important causes, but this is one in an area where I'm an acknowledged expert and media pundit)
> Also the use of Enochian language in Scott’s post was a nice touch.

I think he's trying to sound like Big Yud.
It's pretty bad philosophy. Using AI to generate text isn't remotely in the same category as two little girls talking about water. The two girls not only can talk about water; they can also play with it, they understand it freezes when it gets cold, they can pour it on each other, they understand if they spill it someone might get annoyed, etc. Saying that it's necessary to know that water is H2O to truly 'understand' water is both overly reductive and specious. Sure, the chemists know more about water than the kids, but to act like you cannot make a distinction between the AI and the two kids because the chemist knows more is just lame.
> Saying that it's necessary to know that water is H2O to truly 'understand' water is both overly reductive and specious. Sure, the chemists know more about water than the kids, but to act like you cannot make a distinction between the AI and the two kids because the chemist knows more is just lame. Isn't that exactly his point?
I kinda thought his point was that we should be more impressed than we are by AI being able to recognize a statistical relationship between two words. But who knows for sure. All I can say for sure is that I don't really like the style of making a point through some weird parable and having people argue about what it means.
It seems like his point is that because chemists know more about water than kids, AI understands water as much as kids
no he's trying to justify calling AI smrt even though it's just statistical patterns
You know how they are -- if it can't be expressed in numbers, it's not worth anything!
In cognitive semantics, there is the notion of embodied knowledge, along with "encyclopedic knowledge" [1]. To know a word is to understand all the various ways members of your language community might use it, which involves your full spectrum of knowledge and experience. In other words, a word has, in addition to whatever a dictionary might say, a large and vague penumbra of possible meaning that can be brought forth through metaphor and loose associations. Indeed, a child typically has less knowledge and experience than an adult, and thus will not use language with the same skill and depth. That's true. A machine, by contrast -- here is the point: I think Scott wants to argue that it is a *spectrum*, that the difference between machine and human is quantitative, not qualitative. To a limited degree I concur, except that the current statistical methods we are using for AI are so vastly different from human cognition that I don't think he can make the case. Instead, I think we can argue that, perhaps, we can someday construct silicon brains that sufficiently emulate human cognition that they might join our language community in a natural and uncomplicated way. We're not close, not even a little bit.

[1] Note the wiki article on "encyclopedic knowledge" seems to be talking about a different sense of the term.
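If it helps to make "statistical patterns between words" concrete, here is a rough toy sketch (the corpus, window size, and similarity measure are invented for illustration; this is not how any real language model is built, just the general flavor): count which words co-occur, then compare words by their co-occurrence counts.

```python
# A toy sketch, not a real model: count co-occurrences in a tiny made-up corpus,
# then compare two words by the contexts they appear in.
from collections import Counter
from math import sqrt

corpus = "the girls pour the water the water freezes when it gets cold".split()
window = 2  # how many neighbouring words count as "context"

cooc = {w: Counter() for w in set(corpus)}
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            cooc[w][corpus[j]] += 1

def cosine(a, b):
    # similarity between two context-count vectors
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# The model's entire "understanding" of water is this vector of counts;
# nothing in it freezes, pours, or spills.
print(cosine(cooc["water"], cooc["girls"]))
```

At this level of description that is the whole trick: similar contexts, similar vectors. Whether that sits on the same spectrum as the kids' understanding is exactly where I think the argument breaks down.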

[deleted]

Scott's first degree is in philosophy.
[deleted]
he probably thought of this piece as explaining the simple stuff for a general audience
[deleted]
Nah

What I really love is how he basically treats “mapping statistical relationships between words” and “mapping words to sense data” as the same thing. Even aside from all the other bad epistemology/philosophy in the post, it shows a lack of appreciation for the fundamental importance of embodiment to our thinking, and for how far AI algorithms are from usefully emulating that.

Part of me wonders if he’s so comfortable eliding that distinction as someone who spends way too much time in isolated subcultures and the blogosphere, but I can’t say I’m one to talk in that respect.

Two children are reading a text written by an AI

Why are you talking to yourself Scott?

“It’s not a cult” but also “God comes down from the heavens and it was an AI all along.”

> reads john searle once