r/SneerClub archives
LW commenters are largely supportive of Yudkowsky's call for airstrikes. Best watch out for large shipments of Kool-Aid addressed to Bancroft Way, Berkeley (https://www.lesswrong.com/posts/Aq5X9tapacnk2QGY4/pausing-ai-developments-isn-t-enough-we-need-to-shut-it-all)

By far the most alarming comment is this one:

I don’t know how most articles get into that section, but I know, from direct communication with a Time staff writer, that Time reached out and asked for Eliezer to write something for them.

And then they just fucking published it without any editing? What the fuck? I can’t even with this shit.

For bonus points, one poor sop tried to inject some sanity into the situation and got downvoted to oblivion (of course).

Re the poor sop: he clearly should have read the sequences better. Intelligence[citation needed] is such a superpower that it can just manipulate all humans into doing its bidding and move behind the scenes, like the agilluminati.
Because remember, nobody but Eliezer has any agency or responsibility for their own actions, since they were either manipulated into it by more intelligent people or manipulated out of it by Eliezer (pbuh)
TIME IDEAS – how does the rest of the magazine feel about this?

The comments do have bits of sanity… that are heavily downvoted. One of my favorites, at -53 (because it’s the same phrasing issue I hate):

Eliezer’s repeated claim that we have literally no idea about what goes on in AI because they’re inscrutable piles of numbers is untrue and he must know that. There have been a number of papers and LW posts giving at least partial analysis of neural networks, learning how they work and how to control them at a fine grained level, etc. That he keeps on saying this without caveat casts doubt on his ability or willingness to update on new evidence on this issue.

Of course, I think they actually are inscrutable… to EY.

the lorenz attractor is a chaotic system which is why no one has been able to predict the weather since it was discovered (2 random)
I suspect that the real problem is that EY has already modeled LLMs as sentient human-like agents and therefore is looking for some kind of solution to the same problems solipsists have been noticing for centuries. How can we trust the internal experience of someone else when we can't access it in any meaningful way? Of course, if LLMs aren't sentient and we can understand what they're doing to some extent by looking at the model they ultimately use, this isn't an issue beyond "damn, it's really complicated", but that doesn't get Elon and friends to give you that sweet anti-skynet money.
That's a very generous theory that gives Yudkowsky way too much credit. The problem is that Yudkowsky actually doesn't know enough to be aware that there could be a distinction between a sophisticated language model and, say, the human mind. He doesn't know anything about how either of those things works. It's like if Yud was worried about a particle accelerator accidentally destroying the world and you said "ah well, he's just made the mistake of assuming that it will cause a nuclear explosion, when actually it only causes harmless nuclear interactions". No no no - the truth is just that he remembers from reading science fiction books in his youth that big science things can make stuff go boom, which is scary and bad.
[deleted]
*I don't know.* I suspect that a hypothetical Yudkowsky who is not consumed by millenarian doomsaying of some form or other would be unrecognizable to us as the same person, though. Like it just seems implausible to me that he *could* believe that the world is basically fine and that he doesn't have a unique role to play in saving it from certain destruction. The particular details of the doom are probably unimportant.
[deleted]
I'm reminded of some of the serious-minded posts from Siskind about lying about your beliefs in order to advance them. In retrospect it's incredibly obvious that the ideology he was trying to advance was textbook neoreaction, but given Yud's obsession with how hard it is to properly define someone's objectives it wouldn't surprise me at all to find that he's not even at Joseph Smith levels of belief in his own shit and has gone straight to L Ron Hubbard.
I definitely can't rule out the Joseph Smith hypothesis, I just figure it's less likely based on the fact that he genuinely doesn't seem to be very smart. Lying all the time is really hard because it requires that one attempt to simultaneously understand the truth and also construct plausible-sounding facts that contradict it. Actually believing your own bullshit is, by comparison, much easier. I think it's possible to be a bad faith actor (in the sense of being truly uninterested in other people's needs and perspectives) and to believe your own bullshit at the same time. In a lot of ways the two things are synergistic, really.
[deleted]
Presumably Joseph Smith didn't believe he was actually a prophet of god though? Like, he actually did write those tablets himself and then tried to pass them off as having supernatural origins. Whereas it seems entirely plausible, and indeed likely (to me) that Yudkowsky genuinely believes that he has unique insight into threats that AI poses to the world. Those insights might come from science fiction novels, but still.
[deleted]
That's a fascinating theory and it never occurred to me. So you think that maybe he genuinely believes that he's a historic genius of some sort with respect to AI, but that he is also lying about the dangers of it because he resents being dismissed and ignored? That would be a truly incredible Russian nesting doll of derangement.
I don't like the guy but I'd consider Yud reasonably smart. It's more that his arrogance and self-absorption are constantly writing checks that his intellect/subject expertise (such as it is) can't cash.
I submit that we have no evidence that suggests that he is unusually intelligent, and quite a lot of evidence that suggests the opposite.
He's not some generational intellect, but does seem capable of absorbing and expressing complex ideas. With humility and self-awareness instead of coasting on the glib sophistry of his woefully incomplete knowledge, perhaps he'd have been something more. Probably still a windbag.
> does seem capable of absorbing and expressing complex ideas

I honestly am not aware of any evidence to support that. Smart people usually take complicated ideas and explain them simply but accurately. Yudkowsky does exactly the opposite: he explains even simple ideas at excessive length and often inaccurately.
I'm willing to believe he's good at puzzles based on random anecdotes, even those from him. That isn't worth a lick of salt, though.
he's a bright guy, he really really really should have gone to college
> I think it's possible to be a bad faith actor (in the sense of being truly uninterested in other people's needs and perspectives) and to believe your own bullshit at the same time. In a lot of ways the two things are synergistic, really.

Honestly I suspect this is the more common case -- it can be reassuring to think people are scheming machiavellians who are trying to plot how to take you down, but more often the problem is they just *don't care* about the harm they cause to others
[deleted]
It's certainly not a lack of familiarity, that's for sure. I'm not saying he's secretly a good person, I just think he's also demonstrably a huge idiot with a massively-inflated sense of his own intelligence and accomplishments, so the likelihood that he really does believe the things he says seems higher to me than the likelihood that he's a schemer making it all up to trick gullible people. Doesn't mean he can't also be an asshole or a cult leader or have done terrible things to people. That was sort of my point -- you don't need to have some secret evil lurking in your heart in order to do harm, but the harm you may do is real nonetheless.
[deleted]
Yes, that is the point I am making
[deleted]
(edit: sorry this post was me being an asshole)
[deleted]
I'm not calling him dumb to excuse him, I'm calling him dumb because it's true. Dumb people can still be capable of bad things, especially dumb people who are convinced they're the most smarterest person in the whole wide world like EY is. I'm pretty sure you and I agree on this, so I'm not sure where this conversation is going wrong.
[deleted]
No problem, that's understandable and you're not wrong there. I just think with regards to the AI-apocalypse stuff it is genuinely a case of him not understanding how these things work and getting spooked, rather than cynically using it to gain power/influence/whatever. But that doesn't mean he's not also a power-hungry weirdo who does bad things.
I mean, before the recent AI advancements he just believed that AI was going nowhere because it isn't as smart as he is (even though all of that advancement was already in place, just without anyone dropping millions of dollars on GPU compute), and he had plenty of time to play his unique role in saving the world. Then, along with the rubes, he was bedazzled by this OpenAI crap, which is a whole lot less groundbreaking than it is just more compute and more training data.
Selling doom is basically his ticket to living comfortably without having to have a real job. There’s no chance in hell he’d stop.
I think you and I had the same basic idea of what his thought process is here, but I fell into ratspeak about modeling to (unthinkingly) make it seem more intricate or credible than it entirely deserves. Your way of describing it fits better though.

and there’s an outburst of IQ eugenics in the comments, because of course there is

can't believe he actually said "bell curve" and brought up malthus like cmon at least try to be not hitler
As well as some classic "men and women are different species" arguments for flavor -- all the classics are coming back, is this the season finale?
The eugenics comment thread has some twisted arguments… first over whether dysgenic selection is a thing, then over whether it's actually a good thing because it will slow AI research. They've somehow managed to make a discussion even worse than normal eugenics promotion.
Internal contradiction here. Why did a society without their IQ eugenics generate and produce the superintelligence they fear? They will say, "only the smartest contributed to that!", and then quickly stuff their faces with cheeseburgers, type furious texts on their iPhones, and stomp off smugly like children.

“Someone else should die for this” is such a consistent through line in reactionary politics.

do these people really believe that, pre-AI, we live(d) in some kind of folksy world where individual human motivations drive society and economics? it's so wild to me how these people seemingly put up with globalization, financialization and psychotic technocrat billionaires, but it's chatbots that push them over the edge.

sidepoint: has anyone come across STS research involving ea/rationalists? really curious if anyone is seriously looking into them from that perspective.

Be careful. These people will also easily circle around this argument and use the fears we have about globalization and capitalism to drive this fear too. The fundamental problem is fixation on the unknown. Most people accept that systems of society are so big they only have a limited model of what is really happening and then just accept that. But if you press someone hard to the realization that they really don't know or have control over the world, you can insert any magical demonic superpower you wish in the gaps. This is very similar to QAnon and its cult rationalization process.
oh i agree absolutely, this is what makes rationalists so insidious. they target people who are (more or less) intelligent and who probably would not consider themselves conservative - yet the ideological outcome has a lot of overlap with the far right.
Are you trying to say capitalism/corporations is the ai? Are you some sort of commie?
the world was totally perfect until some bastard doomed us with linear algebra
Somebody made up imaginary numbers and math where 2+2=5 and it all went downhill from there.
WHY WON'T SCHOOL TEACH US HOW TO DO OUR TAXES INSTEAD OF NUMBERS THAT DON'T EXIST angry face
[Stop doing math!](https://i.kym-cdn.com/photos/images/original/002/029/841/a26.png)
Clearly you need to study Norbert Wiener. "There is no homeostasis whatsoever. We are in the business of cycles of booms and failure, in the successions of dictatorship and revolution, in wars which everyone loses, which are so real a feature of modern times."
I was elected to sneer not to read.
TO TRULY SNEER WE MUST TRANSCEND THE UNITY OF OPPOSITES

Breaking: A rationalist suicide bomber plot was foiled at OpenAI's HQ today; the police were able to easily apprehend the terrorist as he insisted on reading several pages of incomprehensible literature out loud before he would pull the trigger.

Later research by the police also showed there was no risk of a detonation as the terrorist seemed to have not assembled the explosive device correctly.

Four policemen died of fentanyl overdoses during the arrest. Biden vowed to increase their budget, while the GOP is creating a bill to stop the constant defunding of the police by the Democrats.

The madman was heard shouting, "Readjust your priors, people!" before being taken away.

So here’s my question(s) about all this.

When I was a kid, we had text programs we could use on our Texas Instruments computer that would “talk” to us depending upon what we said to it. It was fun. It was stupid. The software was doing nothing more than accessing the library it had for text and prompts to spit out what looked like human responses.

Currently we have software that does – to my slow and rusty mind – functionally the same thing. Only instead of using its own cartridge-based library, it uses the internet. And it does images in addition to text. Great. Okay.

So honestly, what’s the difference? How is that “AI”? It’s looking at stuff we already have, collecting and collating it. Big fucking deal. It’s not telling us or showing us anything new. That isn’t intelligence, it’s just basic programming. Maybe this is a juvenile question to be asking this sub, but it’s been on my mind a lot lately.

And also, how much of this hysteria is being exacerbated / exploited by forces who see ways of personally gaining from it, versus how much of it is genuine concern?

It's complicated. Honestly, this gen of AI is fundamentally different because it does, empirically, generate some models of the things you input, without as much supervision as before. Downplaying the advancements in AI plays into their hands, because they get to circle you about the one thing they know something about. The thing is, I believe the topic of sentient AI has a visceral and emotional impact, registering as a threat to people. I believe that is expected. On top of that, there are people who take advantage of it by greatly misrepresenting what the tool is, what it's capable of, and how fragile or durable our meatspace is, for the purpose of hysteria and manipulation. It works, unfortunately.
My layperson's understanding of "sentient AI", or computers working like human brains, is that we're still thousands of years away from any such thing, just because brains are so incredibly complicated (and are not, despite the way they are compared, machines). Am I totally wrong about that?
Short answer: yes, you are wrong. Definitely not thousands of years. It's hard to say (obviously). A couple of ways of estimating it: Neural modeling has gone from incomplete models of flatworms (with hundreds of neurons) to complete models of flatworms to incomplete models of fruit flies (tens of thousands of neurons). If the exponential trend actually held (although it probably won't, as any number of factors in it may break down), complete modeling of a human brain (80 billion neurons) might only be decades away. The lowest estimates of the computational power of a human brain (just looking at the total amount of spiking and the timing of spiking) can be matched by the largest supercomputers. Higher estimates are hundreds or thousands or tens of thousands of times higher, which would be expensive and impractical but not fundamentally unreachable. The early stages of the visual system (V1) are well understood (we know how it works and why it works, on the level of individual neurons, in terms of neuroanatomy and in terms of overall role in cognition), with lots of plausible hypotheses about the later stages of vision and some general understanding of the functional neuroanatomy. Memory has a lot of pieces partially understood… Overall I wouldn't be surprised if a few key canonical microcircuits get characterized and we jump ahead massively in understanding of the brain. This would likely also lead to matching insights in AI. I also wouldn't be surprised if there aren't any convenient breakthroughs like that and it takes centuries more of patiently decoding different parts of the brain. But either way, not thousands of years. As to the machine/brain comparisons and analogies… sneerclub likes to rag on them (especially when they are pushed in absurd directions), but more moderate comparisons have been a driving idea on both sides: computational neuroscience and machine learning have drawn inspiration from each other. It matters whether simpler (and easier to compute) or more complex models are sufficient to describe the cognitively important aspects of neurons, but it isn't a question of whether you could do it in principle. (In practice, if every detail of a neuron is cognitively important, down to individual molecules, it would be too difficult to simulate in sufficient detail.)
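A minimal back-of-the-envelope sketch of the extrapolation arithmetic in the comment above. The starting scale (~10^5 neurons, roughly fruit-fly level) and the two-year doubling period are illustrative assumptions, not figures from the comment:

```python
import math

# Assumptions for illustration only (not claims from the comment above):
neurons_modeled_now = 1e5       # assumed current scale: roughly a fruit fly
human_brain_neurons = 8e10      # ~80 billion neurons, as stated in the comment
doubling_period_years = 2.0     # assumed doubling time for model scale

doublings_needed = math.log2(human_brain_neurons / neurons_modeled_now)
years_needed = doublings_needed * doubling_period_years
print(f"{doublings_needed:.1f} doublings ~= {years_needed:.0f} years")
# -> roughly 20 doublings, about 40 years: "decades away" only if the trend holds.
```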
I thought the "uploaded" C. elegans still didn't swim properly, but of course we could always cheat and train a neural-network de novo worm to do what a real worm does, at least to the extent of our ability to tell what it is that the worm does.
Oh god that poor worm.
Mostly the issue is that AFAIK we don't even know, for most of the junctions, whether they are excitatory or inhibitory. It's like you've got an electronic schematic but it's all just little boxes that wires go into, and you don't know what electrical component each box corresponds to, let alone the properties of that component.
All I want to say here is that it doesn't matter that much. It's sufficiently useful and clearly going to be more useful. It doesn't even have to be sentient or a brain to be useful or get better. It's totally fine to say it is not a human brain, but don't count on it not being important or impactful. It's also ok to admit that it does make us uncomfortable, because its useful interpolation of language and vision is uncanny and likely, again, to be more impactful soon. Admitting that shouldn't be a threat to either of us.
It's not so much that we are thousands of years away from sentient AI as that we're not even working on it. We're breeding a better horse rather than building a racecar, and horse breeding probably won't help us get the racecar, except maybe somewhat indirectly. (Arguably it's actually the reverse, since horses are more complicated than racecars, but eh, you get the point about separate paths and such.)
They might not work like human brains, but they are "better" than many humans at lots of things, though worse than many humans at lots of others. I think the question of sentience and the distinction between animals and machines is somewhat academic. They are accessing and extending our collective biological intelligence, after a fashion, via everything that's on the internet.
You are not wrong.
There is, mathematically speaking, no difference whatsoever between "actual thought" and "using a giant lookup table to match inputs with outputs". What's tripping people up is that the size of the lookup table that is necessary for creating a plausible simulacrum of a person might be a lot smaller than we expected. There's a cultural conceit that the human mind is so mysterious and complex that its workings cannot be accurately summarized by a giant pile of text data, but things like ChatGPT are empirical evidence that that is not as true as many people previously assumed.
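A toy sketch of the point above; the prompts and the stand-in "reasoning" function here are made up for illustration. Extensionally, a computed mapping and an explicit lookup table that agree on every input are the same function; the only practical question is how large the table has to be.

```python
# Illustration only: the prompts and the "reasoning" rule are invented.
def computed_reply(prompt: str) -> str:
    # Stand-in for "actual thought": some rule applied to the input.
    return f"I have considered '{prompt}' and concluded: no."

prompts = ["is the AI conscious?", "should we bomb the datacenter?"]

# The same behavior captured as an explicit lookup table over those inputs.
lookup_table = {p: computed_reply(p) for p in prompts}

for p in prompts:
    # On these inputs the two are indistinguishable from the outside.
    assert lookup_table[p] == computed_reply(p)
```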
It's an absurdly powerful template engine. And an absurdly powerful pattern matcher. And that's enough for it to do some really weird shit. Software using real, honest-to-God natural language as any kind of command language is a complete game-changer. That's *never* happened before outside of science fiction.
[deleted]
those scare quotes are doing 200% of the work in this comment
[deleted]
> not to scare

https://en.wiktionary.org/wiki/scare_quote
I think you're still overstating the matter. There's no plausible case for considering AI to be a unique threat to the safety of humanity. All of the current talk regarding "AI safety" is hysteria.
[deleted]
I mean, you *could* use AI for that purpose, but whether or not it would work a lot better than current methods of radicalization is an open question at best. I don't think it's reasonable to call such a scenario catastrophic; that implies that you are more certain about the outcome of such an eventuality than anyone possibly can be. All the AI doom scenarios are like this. They all have the form of "what if AI gets used for X bad thing and it's a billion times more effective than previous methods of doing X?". Like, sure, that's possible, but it's not plausible. And it completely ignores the other side of the coin: "what if AI gets used for Y good thing in response to X bad thing, and it's a billion times more effective than previous methods of doing Y?" That knife cuts both ways.

has anyone bothered to contact Time to clue them in regarding big yud? he is no expert on anything.

Don’t worry y’all these are just conditional calls for violence, totally different than calls for violence. LOL

So if eliezer was serious about AI risk why would he be writing this in Time magazine instead of planning coordinated Durdenesque direct action against the data centers?

eliezer is an orgasm denier, not a fighter

I thought you weren't supposed to talk shit about how you're gonna be a dick to AI in case you convince it to turn into AM from I Have No Mouth and I Must Scream and start torturing people? Now you're going on about bombing baby AI?

The AI companies seem focused on forms of “safety” centered around keeping their chatbots from regurgitating politically incorrect views, while ignoring the real “AI safety risk”: that mentally unwell people will believe all the AI hype and take it upon themselves to stop the machine apocalypse at any cost.

Members of Yud’s cult have already been involved in at least one murder, and that was just over demands that they pay rent. Who knows what they’d do to “save the universe”. Surely someone who thinks nuclear war is an acceptable response isn’t going to shy away from a few car bombs.

If I worked for an AI company I’d be afraid for my life.

Look, I only think it’s a 1% risk of annihilation, but we need to risk it because of the potential to cure mortality!

> […] my assessment of the risk of ruin is something like 1%, not 10%, let alone the 50%+ that Yudkowsky et al. believe. Moreover, restrictive AI regimes […] may well be a delay in life extension timelines by years if not decades that results in 100Ms-1Bs of avoidable deaths (this is not just my supposition, but that of Aubrey de Grey as well, who has recently commented on Twitter that AI is already bringing LEV timelines forwards)

Thankfully AGI isn’t going to kill us all, so it’s a moot point, but it’s very amusing that sci-fi immortality tech is a valid reason to risk extinction

EDIT: oh and later on in the comment it’s about how we could use AGI for genetic enhancements to IQ because of course lmfao