r/SneerClub archives
Reasonable sub for talking about the merits of AI x-risk arguments? (https://www.reddit.com/r/SneerClub/comments/139b14s/reasonable_sub_for_talking_about_the_merits_of_ai/)

I know this isn’t the appropriate place to be taking rationalist topics seriously and arguing about them so I was wondering if anyone knows of a non-rationalist-infested place – ideally one that’s also skeptical of rats like you guys are – that is intended as a space for conversation about AI x-risk (not acausal robot god lunacy, just catastrophic or world-ending scenarios).

TikTok, I hear the millenarians love it

r/Futurology and r/singularity are both at least moderately skeptical of the AI doomerism, but they do tend to buy into the AI hype. AI doomerism gets a mixed response whenever it’s brought up, and more mundane AI ethics and AI safety topics also get brought up and discussed.

Lesswrong itself does get the occasional skeptical post… but I’m not sure how long that will last before Eliezer manages to drive it into full doomer groupthink…

You can’t predict a game of pinball is about how chaos theory puts hard limits on the ability of computational power and intellect to predict reality
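For a sense of what that argument looks like in practice, here’s a minimal toy sketch (my own illustration, not taken from the linked post) of sensitive dependence on initial conditions, using the logistic map in place of an actual pinball:

```python
# Illustrative only: two trajectories of the chaotic logistic map that
# start a trillionth of a unit apart. The map and parameters are my
# choice for illustration, not anything from the linked post.

r = 4.0                    # logistic-map parameter in the fully chaotic regime
x, y = 0.4, 0.4 + 1e-12    # two nearly identical initial conditions

for step in range(1, 51):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(x - y):.3e}")

# Typical output: the gap grows from ~1e-12 to order 1 within ~40 steps,
# so even a perfect simulator with near-perfect initial data loses all
# predictive power after a few dozen iterations.
```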

Contra Yudkowsky on Doom from Foom #2 has some detailed calculations suggesting the human brain is only a bit above any limits (in power used, volume used, and efficiency) we are likely to achieve in computers under our current paradigm (i.e. without inventing really esoteric stuff like reversible computation), and thus AI likely can’t beat us through cheaper/better hardware, and likely can’t easily beat us on software either.

Brain Efficiency: Much More than You Wanted to Know, by the same author, has even more detail on this topic.
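As a rough illustration of the Landauer-style arithmetic those posts lean on, here’s a back-of-envelope sketch. The brain-side figures (≈20 W power budget, ≈1e15 synaptic events/s) are commonly cited ballpark estimates I’m supplying, not numbers from the posts themselves, and the posts’ real analysis (interconnect costs, reliability, analog signaling) is far more detailed than this:

```python
import math

# Back-of-envelope sketch of a Landauer-style comparison.
# Brain-side figures below are common ballpark estimates, not values
# taken from the linked posts.

k_B = 1.380649e-23                  # Boltzmann constant, J/K (exact SI value)
T = 310.0                           # body temperature, K
landauer = k_B * T * math.log(2)    # minimum energy to erase one bit, J

brain_power = 20.0                  # W, common estimate for the human brain
synaptic_ops = 1e15                 # events/s, rough upper-end estimate

energy_per_op = brain_power / synaptic_ops   # J per synaptic event
print(f"Landauer limit at 310 K: {landauer:.2e} J/bit")
print(f"Estimated energy per synaptic event: {energy_per_op:.2e} J")
print(f"Ratio: ~{energy_per_op / landauer:,.0f}x above the Landauer floor")
```

The raw ratio comes out around a few million, which sounds like huge headroom; the posts’ argument is that reliable computation and, above all, moving bits around cost vastly more than the bare erasure floor, which is what eats up most of that apparent gap.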

grey goo is unlikely debunks the Drexler-style nanotech that most of Eliezer’s most extreme instant-doom scenarios rely on.

and here’s a post on Eliezer’s history of failed predictions. It’s kinda redundant with this subreddit as a topic, but if you want a mockery-free, extra-long summary because you haven’t shaken the notion that Eliezer might know what he’s talking about because he’s spent two decades ~~researching~~ ~~thinking~~ wildly speculating, then this is a good source.

r/ControlProblem just about completely buys the doomerism… I don’t know how open they are to skepticism.

Thanks. Yeah, I've read some of those LW posts. Was there ever a period where Eliezer got treated like a crank in that community?
Nope. He transitioned between ideas in a way that didn’t draw attention to the parallels between his beliefs in AI doom and, for example, nanotech doom.
You also might try /r/MachineLearning, but I think they are much more grounded in what’s actually possible in the present, and they won’t enjoy discussing either wild hype or alarmist doomerism.

Try /r/MachineLearning. You want people who actually understand AI instead of famous rationalists pulling things out of their asses.

e.g. https://www.reddit.com/r/MachineLearning/comments/11ada91/d_to_the_ml_researchers_and_practitioners_here_do/

They are worried about it. Read the comments. A lot of posters on lesswrong talk about working on AI.
Some are, to an extent. Most aren't.
No, they are literally working for them. I’ve read many posts where the person prefaces by saying they work on AI. https://www.lesswrong.com/posts/7jn5aDadcMH6sFeJe/why-i-m-joining-anthropic is an explicit example.

There have been some places related to lesswrong/slatestarcodex which tried to recreate the ‘debate with each other, but without all the racism’ experience (so a less vile /r/themotte). Not sure if they are still around, but they might know more.

Why, though? There’s an infinite number of more worthwhile things to talk about.

That's probably true, but I find it plausible that there's some substantial catastrophic risk in developing AI. I want to go talk to some people interested in this and, in the course of doing so, suss out how plausible this actually is. I don't really want to talk to rationalists, though, because many of them have seemed to me psychologically deficient in various respects relevant to their ability to form good conclusions about AI risk and the probability of apocalyptic scenarios lol

Fwiw (as someone who’s studied it and was very close to the field till recently), I don’t think people who work in AI are necessarily equipped to understand all of the risks or to give definitive answers to complex questions.

What I think they are equipped to do, though, is to provide technical information about how the technology functions, and to correct pseudoscience/misinformation.

Reason I make this distinction is that I think sometimes “leave it to the experts” can go too far in that direction, where they’re trusted to provide answers which lie beyond the scope of their field, if that makes sense. But equally, not having many experts is detrimental too, because it results in imprecision and pseudoscience (and I’d say many rationalist fields tend to have philosophers, rather than computer scientists, in them).

I guess I’d say you need a range of people with interdisciplinary skills. But it’s also crucial people stay in their lane. If tech people provide details about the functionality, then maybe political scientists and economists can play a role in predicting what that means. I think “lay people” can play a role in this too, cos at the end of the day politics and economics are things which affect everyone, and so we all have a stake in it.

So yeah. Tech people can give insight into (for example) “can an AI be used to hack a system?”. Cybersecurity experts can answer “could this hacking penetrate critical infrastructure systems?” Political scientists can answer “what would happen if there was an arms race between nation states to destroy each other’s infrastructure using AI + hacking? How likely is this to occur?”, etc. And they are certainly more qualified to answer this than a computer scientist is.

So yeah, I think it’s a conversation for everyone. It’s more that the scientific aspects need to be respected.

@OP, my advice would be to get information about technical details from technical sources, and then to learn more generally about human behaviour from experts in the field plus your own observations/opinions. And just carve your own path. Be conscious about what you don’t know. Don’t get too big for your boots. And lead yourself rather than follow others, because chatting shit about things they don’t know about seems to be a natural human tendency across many fields of study, sadly.