r/SneerClub archives
"The people who think superintelligent robots will destroy humanity [...] should worry about associating with the people who believe fake videos might fool people on YouTube, because the latter group is going beyond what the evidence will support [...]" (https://slatestarcodex.com/2020/01/30/book-review-human-compatible/)

Algorithmic bias has also been getting colossal unstoppable neverending near-infinite unbelievable amounts of press lately, but the most popular examples basically boil down to “it’s impossible to satisfy several conflicting definitions of ‘unbiased’ simultaneously, and algorithms do not do this impossible thing”.
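(For readers keeping score at home: the impossibility result being gestured at here is the Kleinberg–Mullainathan–Raghavan / Chouldechova one. When two groups have different base rates, no imperfect classifier can have equal precision *and* equal error rates across both groups. A minimal numeric sketch, with numbers made up purely for illustration:

```python
# Minimal sketch of the fairness impossibility result (made-up numbers).
# Given a group's base rate, the classifier's precision (PPV) on that group,
# and the fraction of the group it flags, the false positive and false
# negative rates are forced: they cannot be chosen independently.

def error_rates(base_rate, ppv, flag_rate):
    tp = flag_rate * ppv              # true positives (fraction of the group)
    fp = flag_rate * (1 - ppv)        # false positives
    fn = base_rate - tp               # actual positives the classifier missed
    fpr = fp / (1 - base_rate)        # false positive rate
    fnr = fn / base_rate              # false negative rate
    return fpr, fnr

# Equal precision (0.7) for both groups, but different base rates:
print(error_rates(base_rate=0.5, ppv=0.7, flag_rate=0.5))  # FPR 0.30, FNR 0.30
print(error_rates(base_rate=0.2, ppv=0.7, flag_rate=0.2))  # FPR 0.075, FNR 0.30
```

Same precision, same false negative rate, yet the differing base rates force the false positive rates apart (0.30 vs 0.075), which is exactly the "several conflicting definitions of 'unbiased'" the quote is on about.)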

After spending last week reading live-tweets from the FAT* 2020 conference on algorithmic fairness, accountability and transparency, this is the kind of refreshing steelman take I visit SSC for.

> the most popular examples basically boil down to “it’s impossible to satisfy several conflicting definitions of ‘unbiased’ simultaneously, and algorithms do not do this impossible thing”.

Scott is one Kenneth Arrow away from becoming a monarchist.
The smartest human in the world would be biased if fed biased data. No amount of fancy reasoning, even at the *super-AI* level, is going to make "garbage in, garbage out" not apply.
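(To put a number on it, a toy sketch: fit a perfectly ordinary least-squares model to labels carrying a synthetic group-dependent penalty (the -0.5 below is invented for illustration), and the model dutifully learns the penalty:

```python
import numpy as np

# Garbage in, garbage out: train on labels that encode a group penalty
# and the fitted model reproduces that penalty.
rng = np.random.default_rng(0)
n = 10_000
skill = rng.normal(size=n)                 # the true, legitimate signal
group = rng.integers(0, 2, size=n)         # arbitrary group membership
labels = skill - 0.5 * group + rng.normal(scale=0.1, size=n)  # biased "history"

X = np.column_stack([skill, group])
coef, *_ = np.linalg.lstsq(X, labels, rcond=None)
print(coef)  # approximately [1.0, -0.5]: the injected bias, faithfully learned
```

A fancier model would not fix this; it would just fit the same -0.5 more precisely.)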
> After spending last week reading

Well there's your problem with Dr Scott

As a bonus, check out this weird mishmash of political beliefs Scott thinks are bad:

I think the actual answer to this question is “Haha, as if our society actually punished people for being wrong”. The next US presidential election is all set to be Socialists vs. Right-Wing Authoritarians – and I’m still saying with a straight face that the public notices when movements were wrong before and lowers their status? Have the people who said there were WMDs in Iraq lost status? The people who said sanctions on Iraq were killing thousands of children? The people who said Trump was definitely for sure colluding with Russia? The people who said global warming wasn’t real? The people who pushed growth mindset as a panacea for twenty years?

Really skirting around the F-word there, isn't he.
Alexander could stand in the middle of a handshake between Hitler and Mussolini and call it "a greeting between Right-Wing Authoritarians".

Only “No? Yes.” removed:

And there’s a sense in which this is all obviously ridiculous. The people who think superintelligent robots will destroy humanity – these people should worry about associating with the people who believe fake videos might fool people on YouTube, because the latter group is going beyond what the evidence will support? Really? But yes. Really.

Deepfakes aren’t worrying - to anyone except the journalists who can see their jobs wobbling on the edge of irrelevance - because of what they can show, but because of what their presence can hide. “What, the scandal where the president…” - alright, at this point I can’t think of anything our current leaders could say or do that they would actually bother to dismiss as a fake in order to cover it up. Fuck, I guess Scott is right, we need to panic but also not, like, in a way that might result in any restraints at all being externally applied to the robot overlords’ nonrobot overlords. Just, you know, hope their hearts are in the right place, send some money, that kind of thing.

> If we get a reputation as the people who fall for every panic about AI, including the ones that in retrospect turn out to be kind of silly, will we eventually cry wolf one too many times and lose our credibility before crunch time?

This is actually an interesting point, really impressive levels of irony here.
Scott just doesn't see anyone else's worries as being even potentially justified. "I have serious concerns, you are suffering from hypochondria, they are crying wolf," that kind of thing.
timely reminder of my rants about how dumb the whole "crying wolf" thing is in Alexander's hands: https://www.reddit.com/r/SneerClub/comments/8vswlt/you_are_still_crying_wolf_has_been_updated/ and https://www.reddit.com/r/SneerClub/comments/8zliwe/the_sneerer_enters_the_den_of_rationalists/

And if you were temporarily duped by that Boston Dynamics parody video back in June, well then aren’t you a stupid piece of shit.

S:PDS was unabashedly a weird book. It explored various outrageous scenarios with no excuse beyond that, outrageous or not, they might come true.

Ol’ Scott “Philosophers using hypothetical scenarios to highlight salient considerations in moral debate is weird” Alexander is at it again!

[Russell’s book could be read by my mom.] [140 more words…] As such, it fulfills its artifact role with flying colors.

Is it so hard to write “this is an accessible, well-argued book”? Apparently, yes.

> Is it so hard to write "this is an accessible, well-argued book"? Apparently, yes.

If only somebody had written an article on not writing like a weirdo. Ah well, [sucks nobody has](https://slatestarcodex.com/2019/07/04/style-guide-not-sounding-like-an-evil-robot/). (One of the things that really annoys me about SSC: all these nice rules, guidelines, and methods, but never applying them in a consistent manner. Note the lack of an epistemic status at the start of the article again, for example.)
Because the rules and guidelines are invented (and re-visited) on an as-needed basis to lead to whatever conclusion Scooter is aiming for at the time.
Yes, a good example is this actual article, where Scott doesn't steelman the anti-AGI standpoint (you can't compare GANs, which generate things, or AIs that play chess very well, to human intelligence), but instead holds up Russell's sophistic ("what about the apes, gotcha!") argument as a good thing. Edit: also, why do they think the control problem is solved by having the AGI look for the 'real hidden reason' behind people's commands? This will fail in the same way the 'just follow commands' approach does; I don't see how it fixes anything at all. (Especially when the AGI bumps into suicidal, depressed people, and people who want to euthanize themselves.)
> Ol' Scott "Philosophers using hypothetical scenarios to highlight salient considerations in moral debate is weird" Alexander is at it again!

Interesting that this is the same guy who wrote [this](https://slatestarcodex.com/2015/03/26/high-energy-ethics/)
> Ol' Scott "Philosophers using hypothetical scenarios to highlight salient considerations in moral debate is weird" Alexander is at it again!

Come on, most people *do* consider this weird. Hell, most people don't even read nonfiction, if they read books at all.
That's fair - SA was, in his own weird way, trying to draw a contrast between the accessibility of the two books. It just comes across as him saying "why would this person write like this, so strange!", when the writing in question is (based on SA's own description) extremely normal and conventional in its specific context. Moreover, I assume that it's a context he's familiar with, which makes the whole thing feel particularly stilted. [edit] Thanks to /u/thetimujin who points out that SA literally wrote [this](https://slatestarcodex.com/2015/03/26/high-energy-ethics/).
Most people don't write, either
There is nothing weird about pushing (or not pushing) fat people in front of trolleys!