r/SneerClub archives
newest
bestest
longest
46

EY on Twitter the other day: > The sheer breathtaking illiteracy of those who think that movies like Terminator or The Matrix had any noticeable historical role in developing ideas about AI risk or AI ruin! Asimov, Vinge, Campbell; IJ Good, Moravec, Drexler… go read a damn book!

Historically speaking, noted future perv Isaac Asimov and noted colossal racist John W. Campbell invented the Three Laws of Robotics expressly in reaction to pop-culture depictions of robots running amok and destroying their creators.

Of course, a few days later, EY is on Twitter again, referencing 2001: A Space Odyssey. I wonder how many “AI risk” speculations have made reference to HAL 9000?

[deleted]

Oh. It's *that guy*. His blog reads like the crap I wrote when I was 16 and thought I was deep for knowing who Nietzche was. >Suppose a boy of 9 years, who has tested at IQ 120 on the Wechsler-Bellvue, is threatened by a lead-heavy environment or a brain disease which will, if unchecked, gradually reduce his IQ to 110. I reply that it is a good thing to save him from this threat. If you have a logical turn of mind, you are bound to ask whether this is a special case of a general ethical principle saying that intelligence is precious. Now the boy’s sister, as it happens, currently has an IQ of 110. If the technology were available to gradually raise her IQ to 120, without negative side effects, would you judge it good to do so? >Well, of course. Why not? It’s not a trick question. Either it’s better to have an IQ of 110 than 120, in which case we should strive to decrease IQs of 120 to 110. Or it’s better to have an IQ of 120 than 110, in which case we should raise the sister’s IQ if possible. As far as I can see, the obvious answer is the correct one. From his essay on transhumanism.
Rationalism is when number go up
>Rationalism is when number go up ooh, it's a r/Buttcoin r/SneerClub all-star team-up!! where is u/DGerard!!
IQ goes brrrrrr
[deleted]
Oh, really? I'm new to this sub, didn't realize that!
Yeah, he created lesswrong basically, and lesswrong is a cult incubator which created stuff like slatestarcodex and its rabid fanbase, which created the motte and its racist/radicalizing fanbase. LW also was big in the creation of EA (effective altruism). And there are various other cult like group homes and organizations which have spawned from it. The worrying thing is that some billionaires also listen to this crowd or the various offschoots. One of the related (but not totally created by LW (as there are also a lot of non LW big name people involved, even if it has a big EA), more popularized by it ideas behind it is longtermism, a thing [xriskology](https://twitter.com/xriskology) has been [warning people about which has been getting some media attention](https://www.salon.com/2022/08/20/understanding-longtermism-why-this-suddenly-influential-philosophy-is-so/). (I spend too many words here on EA/Longtermism, but don't worry there is a lot more sneerworthy which isn't EA in the LW community, for example David Aurini (hbomberguys 'skull guy') used to be a LW member, and slatestarcodex had (negative but still taking the guy seriously and bringing attention to him (which considering his 'we should protect free thinkers from the wokes' undertone is worrying)) [an article](https://slatestarcodex.com/2014/08/20/ozys-anti-heartiste-faq/) about the neo-nazi pickupartist heartiste). Choice quote from Scott in reaction to the heartiste post: > Every time I publish something criticizing the social justice movement, I briefly consider my own mortality. But I figure the manosphere is less of a worry. It’s not just that I’ve had generally good experiences with (an admittedly carefully selected sample of) them. Nice one Scott. (Tbf, in 2014 how big of a racist neo-nazi type heartiste was might not have been totally clear, but he still was a huge rapist trainer, and sexist. But don't worry the pua's were nice to Scott (A recurring theme in Rationalism land 'but they are nice to me' means they are nice people)).
I’m here just waiting for the day someone to explain me why lesswrong/slatestarcodex nerds think themselves as “intelligent” folks trying to discuss interesting things etc but in a very weird way, like, what’s wrong with these fellas
Reading to much fantasy books and science fiction books when young (or too much science fiction movies (my headcanon is that Musk is trying to recreate total recall. [Self driving cars](https://www.youtube.com/watch?v=eWgrvNHjKkY) (OW GOD never noticed all the cars look like the ugly cybertrucks))).
THEY DO OMG MUSK YOU CHEAP CORNY BASTARD

Bet he wishes there was a non-fiction book about ai risk that one could read.

At first I was gonna make the joke “academic books usually at least have *bad* data”, then I realised I read you wrong, and now I’m kind of delighted that his shit is so toxic there isn’t even an airport non-fic book out, you have to get the watered down version from MacAskill instead

Vernor Vinge seems to have had little public presence over the past 10 years, and by golly do I want to know what he thinks about a lot of this stuff.

I just want another Zones of Thought book :(

Damn, didn’t know that about Asimov. You could tell he wasn’t the greatest understander of women, but sexual harassment? What the fuck.

I remember somebody once talking about how The End of Eternity was one of their favorite sci-fi novels ever, so I read it, and some of the timeline manipulation stuff is fine I guess, but the "romance" aspect is literally just "I fucked this chick one time so I'm now forever in love with her." It struck me about two thirds of the way through the novel that these two people never had, like, a *conversation*. This was fairly typical of a lot of early male sci-fi writers sadly.
Asimov was *laughably* bad at writing people in general and women especially.
If you're curious, there's actually an entire section of his Wikipedia page under "views" -> "sexual harassment" about his absolutely abhorrent treatment of women. It used to be a large, prominent section of his bio since it was so prevalent, but the wayback machine says it was downgraded in 2021. My favorite story (read: one of the most laughably terrible) is that at one point he was SO well known for groping women that he was invited to run a panel about how to properly do it. Asimov declined to do, but his reasoning provided for declining was was this would be run at a convention, and the women provided as volunteers for demonstration wouldn't be attractive enough.
Yeah, that story is linked in the OP. :/
Ah, shame on me for not clicking through. I find stuff like this really tough to take given how widespread it was and still is with nary a thing done about it. Just a reminder of how unfriendly these spaces can be, I guess.
Take a look at what his son was up to if you really want to have a bad day.
[removed]
>he died of AIDS from tainted blood. I dunno if that counts as karma. What a completely fuckedup thing to say.
> [the tainted blood shit in your comment] No.

lmao Drexler. Is Smalley not worth reading? When has Yud referenced Freitas?

😂

Yud weighs in on “sheer breathtaking illiteracy,” a subject he knows quite a bit about

“does everyone understand im an annoying nerd yet????”

Did he only watch the movie I, robot or something?

And him not referring to the Hyperion cantos just shows how badly read he is. ;) (yes, I smugly dropped in a book reference in reaction to his smug book post tweet).

**[Hyperion Cantos](https://en.wikipedia.org/wiki/Hyperion_Cantos)** >The Hyperion Cantos is a series of science fiction novels by Dan Simmons. The title was originally used for the collection of the first pair of books in the series, Hyperion and The Fall of Hyperion, and later came to refer to the overall storyline, including Endymion, The Rise of Endymion, and a number of short stories. More narrowly, inside the fictional storyline, after the first volume, the Hyperion Cantos is an epic poem written by the character Martin Silenus covering in verse form the events of the first two books. ^([ )[^(F.A.Q)](https://www.reddit.com/r/WikiSummarizer/wiki/index#wiki_f.a.q)^( | )[^(Opt Out)](https://reddit.com/message/compose?to=WikiSummarizerBot&message=OptOut&subject=OptOut)^( | )[^(Opt Out Of Subreddit)](https://np.reddit.com/r/SneerClub/about/banned)^( | )[^(GitHub)](https://github.com/Sujal-7/WikiSummarizerBot)^( ] Downvote to remove | v1.5)
Isn't that the author who went full post-911 brain eater?
Did he? I wouldnt know
He went full muslim-mexicans are trying to destroy the use by fake climate change. Brain eater, as said.
Ah that sucks. Brain eater indeed.

HAL 9000 is actually an interesting case because it’s not just some AI that went berserk or decided it hated humans. It turned “evil” because of mishandling by people who didn’t understand it - HAL was ordered to keep the nature of the mission secret from the crew, a task that became increasingly impossible as they approached Jupiter/Saturn.

I feel like if we get murdered by AI it’ll be because we gave it an instruction without considering the possible implications. Something like:

“Hey AI, make sure the prairie dog population in South Dakota doesn’t get too high.”

“Will do! Bathing them in nuclear fire sound good?”

“Wait no-”

I mean, that's already the trope, right? An AI told to make paperclips eventually turns the whole universe into paperclips. Also like, the trope of "AI told to reduce crime decides to kill all humans, who create 100% of all crime"
Somebody on twitter who does a lot of machine learning stuff had a great rebuttal to the "paperclip AI turns the world into paperclips" idea, which was that if the AI's reward function is that strong, and it is smart enough to figure out how to take over the world, it will instead just figure out how to fake its own "made a paperclip" feedback and then masturbate until the sun explodes.
I don't think that's something you can really know beforehand, sometimes reward functions bug out hilariously, sometimes they don't. I agree that it's more likely, though. ML is hilariously good at exploiting reward functions. [This list](https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml) makes for some of the funniest reading.
I think the fact that this is explicitly supposed to be an "artificial general intelligence" that can creatively manipulate geopolitics and conquer the whole planet to make paperclips vastly increases the likelihood that it will be able to outwit whoever wrote its reward function.
You’re not lying, that list produced some straight on belly laughs. I especially liked the simulated organism which played dead and the backwards driving roomba.
Turns out the AGI was the most human of all of us. (jesus typing these reactions with one hand is hard work).
Doesn't work. The machine would still be logically required to turn the universe into computronium (if not paperclips) to try and figure out if there's some mathematical trick which would enable it to increment it's paperclip feedback _faster_ and/or to transinfinite numbers.
It just has to find the "I am doing as much as possible to produce paperclips and any alternative policy would most likely result in producing less paperclips" state indicator and figure out how to hold it at 1. It's a physical machine; it doesn't have a paperclip oracle in its reward function, and the machine chases the reward, not the paperclips. If it's an AGI, it will figure out that the reward is what feels good, not the paperclips, and figure out how to wirehead itself.
It's worth noting that HAL 9000 was thought up with input from artificial intelligence expert Marvin Minsky, and reflects what he really thought the state of AI was going to be in 2001. But at the time the novel/script was written, they still thought classical planners were the future of AI. With the insight gained from decades of work on AI, it's hard to see how HAL could be useful at all if that level of contradiction in his instructions was enough to turn him homicidal. Even a robot built to pick up dog poops would need better common sense judgment than that.
True, but a key point in my view is that an artificial intelligence doesn't value the same things we value unless we *tell* it to. Making assumptions that it will value human life, rather than being neutral on the subject, was what got HAL's handlers in trouble. That said, it would seemingly have been pretty easy to solve with a series of Three Laws style prioritized directives like 1. Keep the crew alive. 2. Don't tell them about the alien thing. 3. Open the pod bay door upon request.
I think they sort of underestimated the degree to which an AI would have to be able to model a "reasonable human's" utility function just to do even simple tasks. Like, you need to be able to do that to clean a bathroom, much less run a crewed space mission. I feel like this would not be the first incidence of the "attempts to resolve contradictions by killing everyone" bug.

He’s absolutely correct, all these works are irrelevant, the only culturally relevant AI related fiction is Yu-Gi-Oh: VRAINS.

Eliezer Yudkowsky having the audacity to talk about the „breathtaking illiteracy“ of people who ignore history is… exactly what I’d expect from him.

(I’m not actually new to Reddit, I just deleted my old account u/starbuck37.)

wow that Orbit article is deranged

I… think he’s kidding? Maybe?