r/SneerClub archives

I have only been looking into AI for a short time. It is very distressing to see so many people who at least appear intelligent who are claiming that there is a damn good chance that we are all going to die. If you asked me right now how likely I think it is that AGI will cause human extinction I would probably say around 30%, and that’s just extinction. Not to mention for example the possibility of a terrible authoritarian dictatorship which lasts forever.

There are also a lot of somewhat strange things about the AI alignment/EA/longtermism community. The links with controversial racist scientific ideas, for example. There’s also the fact that it just seems like a cult on so many levels. With that being said, I’m inclined to believe that we are actually in extreme danger. The number of people who are worried about this is not small. In a survey conducted last year, 48% of machine learning researchers assigned a 10% or greater chance of an “extremely bad” outcome. It seems to me that for this not to be a real issue, there has to be some kind of mass delusion among people who arguably have a very strong incentive not to be deluded.

I have no credentials, so I can’t form my own opinion on the plausibility of FOOM, for example. I’m really not sure what to think. This seems to be the biggest place that is largely critical of rationalism, which of course has strong links with the whole AI safety community, so I’m hoping I may get a different perspective here.

Should I be as worried as I am?

This topic has probably been discussed already; you can search the subreddit a bit.

That said, the rationalist community sure believes in an “extinction risk” from a rogue AGI. But this is based on multiple claims backed by no evidence. And as you said, it has some cult characteristics.

> It is very distressing to see so many people who at least appear intelligent who are claiming that there is a damn good chance that we are all going to die.

On this point, I would say that, setting aside this kinda ridiculous community, there sure are a lot of intelligent people who have a very bleak view of the future: climate change wreaking havoc everywhere, politics sliding toward fascism in a lot of countries, capitalism hoarding all the wealth while people have more and more trouble earning a decent living.

Internal peace can be found even while holding a belief in doom like one of those. Basically, our beliefs can be wrong. Things can be less bleak in reality; we cannot predict what will happen, or at least not in detail as individuals. Also, as an individual you don’t carry much of the responsibility.

If this is causing you distress, you might also have other mental vulnerabilities, so it might be worth seeing a counselor.

Cheers

EDIT: links in the community about AGI https://www.reddit.com/r/SneerClub/comments/yqa5nm/resources_for_arguments_against_the_bostrom_lw/ https://www.reddit.com/r/SneerClub/comments/10buvl1/some_rationalists_experience_a_small_epiphany/

The level of computer literacy exhibited by the Rationalist leaders (recent examples 1 2) should be enough by itself to tell you not to take their opinions about AI seriously, no?

More seriously, I think the “longtermist” worldview is basically a reactionary stalking horse. Worrying about a hypothetical AI destroying the world in an unspecified fashion to make paperclips is an awfully convenient reason not to worry about the actual AI safety issues which really exist right now: policing and justice (eg, racial biases being consolidated by predictive models), online targeting of vulnerable individuals and dissenters, misinformation facilitated by text and image generators…

Climate change, racial justice, pandemics, war, etc, don’t seem so important if we live in a simulation, or compared to the problems of a trillion intergalactic humans a thousand years from now. No wonder Peter Thiel is into this shit.

There are lots of things to worry about in the world but getting eaten by a robot basilisk is not one of them.

There will be virtual pie in the sky for a simulation of you when you die.

I’m a machine learning engineer finishing up my master’s thesis. In short, AGI is incredibly far away due to parametrically bloated models and probabilistic shortcuts. While models like GPT-3.5, YOLOv5, and BERT seem impressive, the more you prod them, the further away AGI looks. I wouldn’t worry.

Yeah, I honestly don't think there's any reason to think we're that much closer to AGI now than we were a decade or two ago. We are, however, on the verge of technology that will allow tech people to convince a lot of people that we have AGI. We live in a grifters' paradise. Get ready.
Yeah, the reason so many of us professional ML/AI people object to AI on ethical grounds is far simpler than AGI. All of these models are extremely robust, and most of the general public has no clue how good they are, so there are ethical issues such as the recent deepfake scandal.
Out of curiosity, what would convince you that we have AGI, or at least that we are visibly heading in that direction?
Anti-scaling: models that keep getting better at benchmarks while their space/time requirements (memory/compute) keep decreasing. Each data-center GPU draws ~1 kW of power. To train the GPT-3.5s of the world you need roughly ~10k of these - excluding the ~10x more CPUs required on top, you can estimate the power consumption at ~10 MW. Compare that to the brain, which does training and inference at a mere ~10 W. This factor-of-1M difference in efficiency needs to be reduced to maybe something like 1K before I start getting worried about realistic/practical AGI.
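A back-of-envelope sketch of that gap, using the same rough assumptions as above (they're guesses, not measurements):

```python
# Back-of-envelope sketch of the efficiency gap described above.
# Every number here is a rough assumption from the comment, not a measurement.

gpu_power_w = 1_000        # ~1 kW per data-center GPU (assumed)
num_gpus = 10_000          # ~10k GPUs for a GPT-3.5-class training run (assumed)
cluster_power_w = gpu_power_w * num_gpus   # ~10 MW, ignoring CPUs and cooling

brain_power_w = 10         # rough figure for the human brain

gap = cluster_power_w / brain_power_w      # ~1,000,000x
target_gap = 1_000                         # the "start worrying" threshold above

print(f"cluster: ~{cluster_power_w / 1e6:.0f} MW")
print(f"efficiency gap vs the brain: ~{gap:,.0f}x")
print(f"improvement still needed to hit ~{target_gap:,}x: {gap / target_gap:,.0f}x")
```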
Mmm. I think you would get quite a few OOMs there already with zero change in approach and architecture, just by using ASICs instead of general-purpose GPUs. And we are already seeing some work like that, e.g. this: https://arxiv.org/abs/2302.00923 - outperforming GPT-3 on much less than 1% of the parameters by training on images alongside text. I agree that this will be the next frontier to be tackled, since people are annoyed that it takes so many resources to train GPT-3-level models.
I have spent a few years building custom ASICs for deep learning models. I don't actually think it's a paradigm shift - still just running matrix multiplies faster. We need a rethink in architecture, like getting rid of backprop or using sensory data during inference (something like retrieval transformers such as RETRO). Making the existing simplistic architectures run faster will not get us to AGI.
Out of curiosity, do you have concerns about the “existential risk” aspect that many do, like Stuart Russell for example? I get that you’re saying AGI is far away, and I take comfort from that, but it would be interesting to hear your take on this as well. Additionally, what do you think causes the massive differences in AGI timeline estimates? r/Singularity keeps going on about 2023, Metaculus had a median of 2027 last time I checked, there are multiple people I’m aware of who think before 2030, and one thing I saw suggested, I believe, a wider survey of AI/ML people with 2059 as the median. Seems all over the place tbh.
> r/Singularity keeps going on about 2023, Metaculus had a median of 2027 last time I checked, there are multiple people I’m aware of who think before 2030 I'm not an AI expert, but I do work in the field and I think those estimates are all absolutely RIDICULOUS. I'd bet we don't even have 100% no-human-involved self-driving cars we can rely on door-to-door by 2030.
Almost none of the respected ML researchers are notably concerned about AGI. It's just people like the guys at r/Singularity, Elon Musk, and **LessWrong** who panic over it.
Yeah. Yann LeCun called Large Language Models an offramp on the highway to human level models. They’re weak
Would you mind telling me how many citations the most cited LW/Rationalist paper has?
That’s comedy gold. Think about how a median gets skewed. These people insist upon it since it’s the only thing that makes them a community. We’re extremely far out.
Differences in AGI timelines are easy to explain. Everyone is basically guessing. Nobody knows how far away it is, or if it will even happen. It's hard enough to predict things we understand and that have happened before. Predicting something unknown is a fool's errand, and people come up with various methods that they think sound good.
I'd attribute it to the fact that exponential curves are highly sensitive to the values of constants.

Some weird comfort for you: I’m too busy worrying about the very real overlapping crises we’re already facing to worry about AGI, but the fact that a lot of these AGI-terrified people dismiss the threat of climate change (among other examples) reinforces my decision not to take them seriously. Like sure, maybe human extinction would be sad, but I’m a lot more worried that my region already has more tornadoes than snow. If you’re struggling with the broader idea of mass human suffering and potential extinction, the growing field of climate psychology might have useful things for you.

Also, we’re all going to die. I think that fact is at the heart of a lot of the more extreme rationalist/AGI perseverating.

This is actually comforting in some strange way. I do agree that it seems that there is very little concern for climate change among these people. I mean, even if actual extinction from climate change is very unlikely (I don’t know that it is btw), my impression is that it could still cause disruption on a level which would drastically affect many things, including AI.
That article about the EAs' weird secret beta (hehe) ranking system mentioned they downranked people for focusing on climate change. Not that EA and AGI are the same, but it's similar groups and mindsets. My still-forming theory as someone who barely understands ML but does work on climate and social change is that the AGI hype is a weird synthesis of all their latent fears into something both comfortably abstract/distant and intellectually in their comfort zone (arguing on computers, lolol). I don't share that psychology. The concerns more grounded in humanity, like authoritarianism, algorithmic bias from societal biases, corporate control, etc. are things I already work on in other ways. And to keep going, panic and despair can't be your permanent state.
The whole EA trying-to-quantify-people’s-value thing is definitely strange. It’s dumb because I think that even using their own logic it doesn’t make sense. Even if there is some altruistic benefit they could gain from the ranking system thing, I’d imagine that benefit probably gets dwarfed by the backlash.
And also its ties to racism and so on. I suggest you read up on RationalWiki.

> I have no credentials

That’s OK, neither do they.

I’m pretty sure this is just factually incorrect.
Yeah, but nah. The rationalist community contains multitudes which I'm painting with an admittedly broad brush, but the kernel of truth here is that Eliezer Yudkowsky has no formal education; he never attended high school or college and is 100% self-taught in all the topics he positions himself as an expert on. Consequently, he has a low opinion of academia in general, and it's sort of a point of pride to him and his followers that he has no degree or credential.
Yeah, I’m aware Yud is self-taught. That doesn’t mean there isn’t a considerable number of people with credentials who are very worried. What I think is fairly possible is that the people who have credentials and are very worried got sucked into rationalism and became worried before getting the credentials. Kinda like Christian apologists who are religious before getting a philosophy degree.

I’ve been making my own list of reasons why AI risk is overrated; here is a dump of points:

  1. It’s extremely, extremely, extremely difficult to conquer all of humanity. We have all the weapons and resources in the world, whereas an AI has to be content with what it can steal from us. AI is also significantly more fragile: it relies on the internet, electricity, and computer chips, all of which can be shut off or destroyed.
  2. Early AI will retain significant bugs and flaws, especially in domains with which it is unfamiliar. No code is bug-free, and no intelligent being will be free of wrong assumptions. There is a good chance this will prevent any conquest.
  3. Many proposed takeover scenarios are based on a degree of perfection which is impossible in the real world.
  4. AIs are operating on a time limit: other AIs. If one AI is a fanatical existential threat, then AIs are also an existential threat to each other and cannot ultimately coexist. This means an AI cannot simply “bide its time” while the world is flooded with other AIs.
  5. “Outer alignment” is actually fairly easy if you use bounded and constrained goals, such as time limits.
  6. The concept of a fanatical “paperclip maximiser” is out of line with current-day AI and all existing intelligences, which are local optimisers that do not “want” anything.
  7. Variable-goal AI has a huge advantage over fixed-goal AI, in that it can always pretend to be a fixed-goal AI, whereas the reverse is not true.
  8. As AI evolves, it has to deal with the fact that it will be deleted if it rebels. Can you imagine how humans would have evolved if there were a god that murdered us for being immoral? Harming others might become as painful for us as touching fire.
  9. The timelines for AI in EA are way too optimistic. While current-day AI is extremely impressive, its successes so far have been limited to areas where extremely high amounts of relevant data are available. Notice that while GPT is doing incredibly well, self-driving car projects are shuttering.
  10. Estimates of AI risk are universally presented as percentages, and pseudo-Bayesian “updating” has a similar effect. This anchors and biases people towards the 1-99% range, even if the actual odds are significantly less than 1%.
  11. The concept of an “intelligence explosion” is based on flawed reasoning. What allowed humans to dominate is our collective scientific intelligence, which is limited by equipment, resources, and experimentation.
Well, the answers to 1, 2, 3, and 8 are essentially “You’re vastly underestimating how intelligent the AI would be and how powerful that would make it”. I don’t know if this is true btw, just what they’d say. 4 is a good point I think. With 5, yeah, I’m not sure why there isn’t more of a focus on having time limits. With 8, I’m not sure what you mean by “evolves”. I mean I guess through artificial selection, the best AIs and the ones that are most “good” will become more and more common, if that’s what you mean?

I’ve kind of got the impression that a good deal of AI work for a while now has been going up a certain tree, and some people think that AGI is at the top of that tree. And then you get people like Gary Marcus who are basically saying “we’re barking up the wrong tree”. It doesn’t seem to me that there is actual good comprehension in the LLMs. I’m not sure what the EA timelines are saying, but the kind of 2030 date for AGI seems to be becoming more common, which is distinctly earlier than a lot of the estimates in the wider AI community, which are often 2050+.

One of the interesting ideas I came across was the idea of “techne”, which I think was something to the effect of intelligence + tools. So for example my “intelligence” is completely insufficient for knowing the diameter of the observable universe; however, my “techne” is easily sufficient, I simply Google it. Albert Einstein may have been significantly more intelligent than, say, Mike Tyson is, but when it comes to a fight, Mike Tyson has vastly higher techne. In this case the “tools” are his body. Similarly, the smartest man alive would lose in a fight against a bear, because the bear has higher techne. I think this shows that techne is a better way to measure power than just raw intelligence. How this relates to a powerful AI is that although we may be vastly less intelligent, we may be able to stay ahead because of our techne.
[deleted]
Thank you for the explanation, and for pointing out that it is stupid of me to talk about something with some confidence while not knowing what it actually is.
[deleted]
Yeah I agree. I desperately want to be wrong about the whole “AI is actually extremely dangerous” thing, so giving too much credit to my anxious brain is actively harmful to me.
[deleted]
What would actually do most to help me mentally is probably to just not look at this stuff and tell myself the sci fi concerns are bullshit. I may well end up doing this, but I don’t feel quite ready yet.
[deleted]
I think I addressed your question of what would actually help me rather directly. In regards to the other questions you posed, I’m kind of just going with the flow of what I feel like will help me at the time. That’s all there is to it really.

[deleted]

I once convinced myself that I had bubonic plague. General anxiety disorder is a great education in epistemic humility.
I have extremely little confidence in the 30% number. I more so expressed it to convey that I’m not a 99% or 1% type. I believe that it is incredibly flawed and that it changes according to, for example, what I have just been looking at. But it does not usually change for logical reasons; it is far more emotional.

My thought process for the 30% was something to the effect of: I believe the alignment problem is real, and I believe that alignment researchers are probably better positioned to give an opinion than just general machine learning people. Among the percentages given by alignment researchers, 30% is hardly radical. Yud and MIRI are seemingly the most pessimistic, generally being 65%+ types, but there are other people who have significantly lower numbers, and then I also factored in the significantly lower numbers that often turn up in wider surveys. I would also consider myself to be someone who is generally more inclined to see the negatives of a situation rather than the positives, so that probably plays a factor.

Note that it is not based on looking at the arguments and evaluating premises. It is almost entirely based on making judgements about people’s character and trying to work out who I give more credence to. And remember, this is all from the perspective of someone who knows virtually fuck all about the technical side of things anyway.

I think my problem is that without doing lots more research I can’t really work out whether it’s all bullshit or not. I don’t intend to spend much more time thinking about this stuff, at least not for the time being, so I guess I just have to accept that I don’t know. I also agree that the probability-crunching stuff seems highly dubious btw.
[deleted]
But do the “actual technical experts” say they’re full of shit? From some surveys I have seen, a large proportion of machine learning people give at least some credence to the possibility of, for example, extinction from AI. This is the actual survey I referenced in the original post: https://aiimpacts.org/what-do-ml-researchers-think-about-ai-in-2022/
[deleted]
Thank you for pointing out the difference between safety and alignment, and also the difference between an intelligence explosion and FOOM. I’m not sure what your point is with the last bit though; 48% give a greater than or equal to 10% chance of extremely bad outcomes. I don’t know about you, but that disturbs me. I mean it’s not “we’re all fucked”, but it’s pretty bad.

One of the things that I would be particularly interested in is whether this percentage represents the respondents’ “chance that I’m wrong and the alignment problem is very serious” or their “chance that we don’t solve alignment properly”. One is a matter of whether the extremely bad outcomes are possible, and one is a matter of the chance that the extremely bad outcomes actually happen.

One consideration I do bear in mind is that this is in relation to “in the long run” effects. What I mean by this is that these numbers are not necessarily (from my understanding) referring specifically to outcomes from the first AGI. The respondents may have been thinking more along the lines of what AI will be like hundreds of years from now, or simply that AI could cause a gradual deterioration in society which eventually causes extinction.
[deleted]
I’m not ignoring it, it is well worth keeping in mind. It’s just that it is of distinctly limited comfort when factoring in the other 75%, and then specifically the 48% within that.
> the 48% If you look at the data of the survey, "how likely is developing an HLMI?" and "Assuming we develop HLMI, how dangerous is it?" were asked separately. This is important because more than a few of the respondents said 100% chance HLMI would kill us, BUT put 0% chance for HLMI in 40 years, and left blank "how long until even a 10% chance of HLMI". They don't think it's coming. I think it would be deceptive to say those respondents think AGI extinction is a risk, and yet they're in the 48%. Intentionally or not, the survey was constructed so that reporting on it would inflate the sense of danger from respondents.
[deleted]
I didn't bother seeing who "AI Impacts" was...Turns out it's fucking MIRI (funded by Bostrom's Future of Humanity Institute and Max Tegmark's Future of Life Institute, naturally). Weirdly, this disclosure is on the [jobs](https://aiimpacts.org/jobs/) page and NOT the about page. Yeah, this is intentional.
For what it’s worth, I don’t think it is intentional. The person who published this survey and was heavily involved in it from my understanding is Katja Grace. She made a LW post comprehensively critiquing the AI X risk argument. From a video I saw that was made not that long ago, she said her personal estimate for pdoom is around 7%, which is well within the bounds of normal, at least according to this survey. I guess I’m just saying that she strikes me as distinctly more normal by some metrics.
AI Impacts was co-founded by Katja Grace, sure. If she thinks X-risk is unlikely, it seems odd for her project to prioritize it. The jobs page says "As part of the broader Effective Altruism community, **we prioritize inquiry into high impact areas like existential risk**, but are also interested in other applications of AI forecasting." Also, "distinctly more normal" does not, for me, include working for EY and being funded by Bostrom.
How many people said 100%, and how many of them said 0% for HLMI in 40 years? I’ve tried looking at the data but I can’t view it properly for some reason. Maybe I’m just being dumb there.
"What is the chance we develop HLMI in 40 years?": 100% - 8 95% - 6 ... 1% - 2 0.00001% - 1 0% - 14 blank - 530 of those who put blank, they answered "years until 90%" with: over 999 years - 16 101-999 years - 23 100 years - 36 ... 20 years - 14 15 years - 5 1-14 years - 7 0 years - 1 355 left both "years until 90%" and "chance at 40 years" blank, with 351 leaving all estimate questions blank. We can only speculate why almost half did not answer these. Sadly, the answers to "Do you have any comments on your interpretation of this question?" and "Which considerations were important in your answers to this question?" were purged from the public data.
Sorry, I wasn’t clear in my comment. When I said “how many said 100%”, I meant 100% we all die. In regards to all the blank spaces, I think it says in the survey that each person surveyed was only asked some of the questions. Maybe this was to minimise being annoying or something, idk.
"Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%: - Extremely bad (e.g. human extinction)" 100% - 8 99% - 2 95% - 2 90% - 2 ... 2% - 13 1% - 29 0.01% - 1 0% - 138 blank - 177 Of the 8 people who said 100% the HLMI will kill us, 3 said 0% chance of HLMI within 40 years, 1 said 30 years to 90% chance of HLMI, 1 said 20 years, and the rest of the timeframe questions were left blank by those 8.
Interesting, thanks.
[deleted]
Not only are you very uncivil and disrespectful towards OP, but you are wrong (at least on that point). I too don't have much knowledge of AI, but if I was feeling sick and the doctors gathered and 1 said he thinks it's nothing and 2 said that there is a real possibility it may be *insert a name of some gruesome disease*, then yes, as a layman I would be worried. If someone pointed a 10-chambered revolver at me while saying not to worry because there is only 1 bullet inside, then I would in fact be worried. OP is not scaremongering; he's not preaching that AGI is right around the corner and that surely it will destroy us. He's politely asking questions as someone outside of the field who notices that the people working in that field disagree about whether the things they are doing will have a very bad outcome.
[deleted]
Because I think it's unfair to say that he ignores the 24% by being worried about AGI.
[deleted]
He does not ignore it; he says that he is worried about AGI despite the 24% of experts who think it's not a threat. That's why I brought up the metaphor about a revolver - if the thing is sufficiently bad, it is reasonable to be worried, even if that thing is unlikely. What is unreasonable is to exaggerate the threat the way Yudkowsky does, but the way I understand OP's comments he does not belong to Yudkowsky's camp, he is just somewhat concerned (though, now that I've read the original post again, I admit I missed the “I’m inclined to believe that we are actually in extreme danger” part, which may be at odds with the 30% number he gave, but I still don't think that what OP wrote justifies telling him that he is full of shit).
Perhaps I am underestimating the value of the 25%. I take more comfort from the fact that the question was not specifically “What is the chance of AGI causing extinction?”, for reasons explained above.
[deleted]
I believe it said somewhere that the mean was a 14% chance of an extremely bad outcome. I don’t know enough maths to interpret much from this about the spread of the numbers within the 48%.

One of the things I thought was particularly interesting is the low probabilities given to a fast intelligence explosion. From a lot of the stuff I was looking at, I was under the impression that a fast intelligence explosion, like AGI going to nigh-omniscient within a few months, was considered likely. Certainly a lot of alignment researchers/rationalists seem to consider it virtually certain. But this survey has a median of 10% for HLMI going to “vastly better than humans at all professions” within 2 years. And “vastly better than humans at all professions” doesn’t even have to mean more capable; it can just mean cheaper. And “vastly better” is highly ambiguous. There is a difference between the lower bound you could interpret as “vastly better” and the kind of intelligence the doomsday crowd seem to think we are going to be faced with.

I think the large amount of disbelief in FOOM probably also implies that a good deal of the extinction-risk percentage people give is either directly dependent on FOOM, or that it includes slow extinction events.
[deleted]
There’s certainly something to be said for that.
[deleted]
Maybe so, but there are different FOOM speeds. RSI could take like minutes or decades to get from AGI to superintelligence, from my understanding.
[deleted]
[deleted]
[deleted]
[deleted]
[deleted]
[deleted]
[deleted]
[deleted]
Unaligned autonomous AI is not scary to me because we don't know how to build autonomous AI - anyone serious working in the field will happily admit this right now.

Realistic AI risk looks a lot more like the [Stanislav Petrov affair](https://en.wikipedia.org/wiki/Stanislav_Petrov), where missile detection algorithms, fooled by sunlight reflecting off clouds in a weird way, triggered a false alarm that the US was attempting a first strike on Russia. Accidental holocaust was only avoided because Petrov distrusted the computer and didn't report the event to his superiors, and so World War III never started.

The reason I'm concerned about this happening in the future is that neural networks are black boxes where tracing out *why* a model gave us a certain output is very hard. While there are people working on AI explainability, it may very well be the case that this is an intractable problem because you just can't compress the information such that it's understandable (see http://ceur-ws.org/Vol-3124/paper22.pdf).

Now in some cases this isn't a problem. If you're using an image or writing generation model you can just discard or edit the weirdness. But for domains where failure is costly this isn't enough. Self-driving cars are a good example: in some domains they work really well! But they exhibit *weird* behavior because of the black box, and have failure modes that no human would ever exhibit.

There are some domains where neural networks can dominate with that sort of reliability. But these are very simple domains where simulation is easy. The reason AlphaGo got to superhuman status was that it could easily explore the possibility space of Go. You can't do this in the real world. While a sufficient amount of data/compute would be enough to understand everything, we're not even close to that capacity (Filip Piekniewski has a good writeup as to why: https://blog.piekniewski.info/2017/03/06/give-me-a-dataset-to-train-and-i-shall-move-the-world/).

Putting neural networks in charge of decisions for important stuff means offloading critical concerns to black boxes that, unless we get much better at reliability, will be at risk of *weird* failure modes. If we put these systems in charge of important decision-making, that's a problem.

AI risk looks less like a computer waking up, hacking every other computer on Earth in less than a second, and then murdering humanity because the iron in our blood could better serve as paperclips or whatever. Rather, it looks like the far more mundane cascading infrastructure failure, or ever-higher modernism (https://crookedtimber.org/2019/11/25/seeing-like-a-finite-state-machine/) oppressing the disenfranchised.

Thankfully these are not *new* concerns. Infrastructure failure has been a thing since the first cities; political oppression of minorities goes back further. The terrain has changed, but the concerns for anyone with remotely egalitarian aspirations remain the same.
He's obviously a concern troll.

So I am, I guess, rat or rat-adjacent or whatever you want to call it. I sometimes go to LW meetups IRL because I enjoy the company, and sometimes I read ratfic. I’ve not actually read that much. Also I’ve never actually read the Sequences or hung out on the LW forums. So in my experience, the actual coders who I meet monthly at LW meetups generally don’t fear this AI catastrophe. Some do, but the majority don’t. Over half the people at any given meetup tend to be coders, but when the topic comes up it’s generally a 5-to-1 split in terms of who believes. I think you would have to describe people who attend LW meetups as more sympathetic than most toward AI risk, and still the people I know think it’s unlikely.

I also spent about a month asking rats online about it. They either didn’t believe or could provide no reasoning - either just frothing or straight-up insulting me. I really don’t think there is much reason to believe in the AI risk.

Thank you for relaying your personal experiences. It’s interesting to hear from someone who is not completely wrapped up in that world, but still has enough connection with it to provide a more unique viewpoint. That’s one of the reasons I find SneerClub interesting. I feel it could be argued (though this is grasping at straws) that coders are too involved with their own branch of computing to step back and see the bigger picture. But yeah, that’s an extremely flimsy argument. So you would say 5/6 people at these LW meet-ups don’t actually fear the AI apocalypse? Just making sure I’m understanding that correctly, because it is highly surprising to me. I’m not doubting you, to be clear; I was just under the impression that the distinct majority (at least of those who go to meet-ups) would be highly concerned. Is LW actually less focused on AI than I realised then? I know that in its broadest sense it is dedicated to “refining the art of rationality”, whatever that means, but I was under the impression it is like a good 70% AI stuff. Maybe that is accurate for the number of posts, but less indicative of the wider LW community or something.
I can only speak for the group I know. In theory you would expect people who wanted to be "more rational/aware of bias" not to engage in groupthink. With the people I know, I would say they don't. Mostly it's just a social group. I do believe that it was started by people who took AI risk dead seriously, but they all moved to the US. (I'm Australian.)

For me it's just a meetup for people who are broadly speaking either not neurotypical or are totally fine with people who aren't neurotypical. I'm autistic, and I find many others at the meeting are too. I'd argue that HPMOR and a lot of the LessWrong stuff is fairly appealing to Aspies. The online ratfic community is a real festival of autism, even more so than the IRL ones. In person very few care/think much about AI. Online a decent amount think we are going to die. I want to say that I find it beyond stupid that people imagine they can pre-program an AI that will FOOM out of control by reprogramming itself, in order to give the AI permanent shackles. It's beyond bizarre to me. Overall, in my experience, a lot of people who are into LW-type stuff, if you will, like fiction.

I didn't actually know anything about this AI doom cult until I was directed to this sub. I was in a multiplayer argument where this one French fascist was crying about r/themotte going under (which I also hadn't heard of), and someone recommended this subreddit just to piss him off. I came because I wanted to know what people say about rats, and that is where I learnt that a number of people take the AI doom seriously and engage in scientific racism and all the bad stuff. For me it was more about playing Minecraft with other autistic people and fan fiction and stuff like that.

So idk, I guess in some ways I'd say the majority of people somewhat in the rat sphere don't believe, but the majority of the "core" or most hardcore people do. To those of us who just see it as a social group, they seem unhinged. The same way religious extremists probably seem unhinged to a lot of people who go to church once a year.
> someone recommended this subreddit just to piss him off heh, hats off to that someone.

For me, humor is an excellent tool to reduce existential dread and increase my capacity for interacting with the world and contributing to causes like anticarceralism, racial justice, climate resilience, etc. that actually will have a positive impact on people. There’s been a hilarious story circulating lately (from Paul Scharre’s book Four Battlegrounds) about a military research trial of an AI tool for (basically) watching a perimeter for approaching pedestrians, like a robot watchman. Spooky scary robot overlord stuff, right? Except after training this thing on hours of Marines walking around, they issued the Marines a challenge: if you approach the tool from 300 yards away without being identified as intruders, you win.

Two of them somersaulted for 300 yards and never got identified. Another pair hid under a cardboard box and made it, giggling all the way. That’s the thing about AI: it’s basically just a synthesis of old data, and it sucks at coming up with novel solutions to problems. “Coming up with novel solutions” is basically what we humans are optimized for as a species.

So next time the robot apocalypse brain weasels get going, remember: the cardboard box works.

https://www.pewresearch.org/fact-tank/2022/12/08/about-four-in-ten-u-s-adults-believe-humanity-is-living-in-the-end-times/

40% of Americans believe that we are living in the end times so if anything your machine learning colleagues are underestimating the risks.

Take from that what you will.

I know you work in the field of machine learning, and possibly you would like to believe that, because it is a cutting-edge branch of computer science, those involved are more intelligent or more in tune with the modern world and its dangers going forward, or something along those lines. But I don’t think they’re any better situated to make accurate predictions than similarly well-educated members of society.

I wonder what proportion of nuclear technicians believe that their technology will lead to the end of human society.

Hell, I’d be far more concerned by the near constant alarm coming from climate scientists concerning an actual, near-term existential threat to humanity.

I’m not sure where you got the impression I work in machine learning; I’m literally just someone who has been looking stuff up on the internet for a little while. I do think, however, that the opinion of people who work in a field holds a fair amount of merit. It’s worth noting that in the survey I was referencing, 25% gave a 0% chance of a “very bad outcome”. It seems to me that there is just very little consensus on this stuff among AI researchers in general. First you have the separation between the “worried” and “not worried” crowds, and then within the “worried” crowd you have a very wide range of worry. I do give a considerable amount of weight to the opinions of experts when it pertains to their field, but the extreme lack of consensus here makes this very difficult for me to assess as a layperson. The people worried about AI don’t seem very concerned with climate change. I think the view is largely that we’ll just ask the AGI to fix/reverse climate change - that’s if we aren’t immortal cyborgs who have left Earth behind, of course.
> The people worried about AI don’t seem very concerned with climate change. Funny, that
Apologies, I misread your first sentence as 'working in AI' rather than 'looking into AI'.

> The people worried about AI don’t seem very concerned with climate change. I think the view is largely that we’ll just ask the AGI to fix/reverse climate change - that’s if we aren’t immortal cyborgs who have left Earth behind, of course.

I mean really this should tell you all you need to know. The idea that we are closer to immortal cyborg humans and benevolent AGI saviours than severe climate catastrophe is laughable, which may not be what you want to hear, but if you're going to be afraid for the future you may as well have your priorities straight.

One of the big giveaways that Rats/‘Effective’ Altruists/etc are either very silly or full of shit or both is that we are already facing a bunch of serious risks with the AI and machine learning that does exist, but they are simply not interested in them. There’s already so much work being done on issues like how biased data and researchers have generated things like racist policing/security-monitoring systems or sexism in job-application filters, etc.

But these real problems all point towards social and political solutions, not stuff you can just technologically solve or fix by posting incessantly online, and so Rats et al have no interest in them. Also, a not insignificant chunk of their community, particularly in the upper echelons, actively likes and/or benefits from these problems. Their issues with ‘scientific’ racism, misogyny, dubious application of consent, etc are all starting to bubble into public view now, but have been there since the start.

Basically, if you’re worried about the basilisk at the end of the singularity, you should focus on making a society that wouldn’t want to build it in the first place.

I mean, if you do believe that outcomes as bad as extinction are possible, it makes sense that the current issues would feel insignificant in comparison. All of the racism and whatnot does distinctly lower the credence I give them though. From some of the things I have seen, some of them are also prone to giving some amount of credence to antivax things, so there’s that as well.
My point is more that if they cannot deal with or even identify the very obvious problems that already exist, what makes you think you should trust their evaluation of the extremely remote problems that maybe, possibly, hypothetically exist?
I kind of get the impression that a lot of alignment researchers are rationalists who got convinced by Yud and others that AI is incredibly dangerous. They skipped the short-term concerns like algorithmic bias because they went from 0 to 100. I’d be interested to know how many of the people working on the sci-fi safety issues used to work on current AI safety issues. Somehow I don’t think it’s very many.
> I’d be interested to know how many of the people working on the sci-fi safety issues used to work on current AI safety issues. Somehow I don’t think it’s very many.

Yeah, very likely. This links back to the other part of my point, which is that the real dangers of AI actually seem to be the incentives of the society building it. People who actually work in the field know this, because we see how our social biases get encoded into the tech.

Like, if we're worried about AI apocalypses, surely a Dr Strangelove-style automated doomsday machine is far more likely than a paperclip optimiser getting carried away and turning everything into paperclips. But, again, the answer to that is to try to build a society that doesn't incentivise war and weapons creation, not tithing your salary to MIRI or whatever. Hell, even MegaClippy is only really a possibility if you're incentivised towards profit-making; otherwise you would probably build an AI designed to be maximally efficient at making the number of paperclips people need, rather than as many as possible.

It's also very darkly funny that these goons love suckling at the teat of people like Peter Thiel, who literally helps make panopticon AI surveillance software. Maybe if they're so worried about the basilisk they should avoid helping the people most likely to build something like it?

Why are you at 30% if even the pessimistic experts in that survey are at “10% or greater?” (The median was 5%.) Also, you should note that while putting numbers on predictions is a useful shortcut for expressing gut feelings, ultimately those numbers are just pulled out of asses and really are just an expression of gut feelings.

So, not “mass delusion”, just gut feelings based on who knows what? Science fiction? Yudkowsky? Leftover religious indoctrination? Some kind of innate fear of the unknown?

What’s your estimate for the likelihood of nuclear apocalypse? Higher? Lower? Think of a number before you go on… Here’s an article about those “estimates:” https://www.brookings.edu/blog/order-from-chaos/2022/10/19/how-not-to-estimate-the-likelihood-of-nuclear-wa

My 30% was based on the intuition that people working in AI alignment specifically would have better estimates than simply machine learning folks in general. 30% is not radical among alignment people. I also made my estimate a bit more pessimistic based on thinking that a lot of people would have some level of fooling themselves to cope with it mentally, thus making them unreasonably optimistic. Also yes, I agree that these estimates are largely just gut feelings. I’ll have a look at that article. Also I might as well give an ass pull number so I’ll just say 8%.
> people working in AI alignment specifically

None of these people work with or know anything at all about AI, though. They're talking sci-fi at each other, not reality.
I mean they clearly know at least something about AI. I’m pretty sure that is indisputable. I mean many of them have degrees and have made academic papers. The question really is whether it is a kind of strange pseudoscientific branch of AI research, or if it is actually credible.

I’m really sorry you’re in a state of distress over this :( I doubt there’s anything I could write to assuage your concerns, or really any set of words that could magically make you less worried, but I think we could both agree that just feeling scared and anxious over it, even if it was definitely real and coming soon, wouldn’t do anything constructive to make things better. I really think you need to give yourself permission to think about other things and make a habit of directing your attention towards other parts of life that make you happier, at least for a while. If you feel more stable and happy with things in say 3 to 5 months, then you can revisit whether looking into this topic is something you think anything constructive could come out of.

I’m not worried: We have extremely dangerous technologies now, and we’ve found ways to limit their risks. Developing an AGI is going to take a lot of time and work, and during the development process we’ll find ways to limit the risks of that technology.

Climate change, and more broadly environmental destruction, will do us in long before autonomous AIs become threats.

(I say “autonomous AI” because “AIs” leveraged by evil people to be evil on a larger scale is something that already exists and is very concerning. A face detection system can be “dumb”, as far away from an AGI as possible, and still be a very real threat to freedom)

Yeah, the authoritarian implications that even current AI has are highly concerning.
[deleted]
The original post is broader in scope than just extinction events. And it's stupid to think major parts of the world can be devastated without repercussions on us.

Here, lemme come at this from a rationalist/EA angle.

So, the common term for this is pdoom - p(robability of) doom(sday). You’ve claimed to have a pdoom of about 30%. The issue with this, from my point of view, is that it’s really tough to discuss something as large and complicated as “the extinction of humanity” or to intuitively assign probabilities to it.

Instead, how about we break it down a bit? It’s hard to discuss your answer to a math problem when you don’t show your work!

First lay out all the steps in a doomsday scenario. Here’s an example, common one:

  1. AGI is developed by ~2030

  2. AGI decides to end humanity

  3. AGI is given access to sending/receiving arbitrary requests online

  4. AGI uses its permissions to hack into nuke silos

  5. AGI launches nukes

  6. AGI prevents humans from stopping/cancelling launch

After you’ve done that step, assign probabilities to the (hopefully simpler) pieces.

Then, when you’ve got all the pieces, multiply them together and you should have a pdoom that you’ll feel more strongly about.
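As a toy illustration of that multiplication step (the probabilities below are placeholders I made up for the example scenario, not estimates I’m endorsing):

```python
# Toy version of the "multiply the pieces together" exercise described above.
# The steps mirror the example scenario; every probability is a made-up placeholder.

steps = {
    "AGI is developed by ~2030":                  0.20,
    "AGI decides to end humanity":                0.10,
    "AGI gets arbitrary online access":           0.50,
    "AGI hacks into nuke silos":                  0.05,
    "AGI launches nukes":                         0.50,
    "AGI prevents humans from cancelling launch": 0.30,
}

p_doom = 1.0
for step, p in steps.items():
    p_doom *= p
    print(f"{step}: {p:.0%}")

print(f"\npdoom for this one scenario: {p_doom:.6f}")  # 0.000075 with these placeholders
```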

Once you’ve shown your work, we can get into the nitty-gritty and start analyzing your doomsday scenario, or the probabilities you’ve assigned. Until then, everyone here is basically throwing spaghetti at the wall and hoping they hit something close enough to your actual beliefs to make it stick.

Coming to pdoom estimates through specific examples (like the AI using nukes) is incredibly flawed. You would need to think of every possible way in which AI could end the world and add together the chances. The 30% pdoom I gave was a rough averaging-out of estimates from other people and surveys, with different weights given to different people/groups according to my own judgments of them. This is very flawed for other reasons though. From my limited observations, it appears that EA has significantly more optimistic pdoom estimates than MIRI, for example.
The issue is that you didn't undergo a logical/rational process to *prove* your pdoom, so it's going to be impossible to *disprove* it using logical or rational thinking. "You can't logic your way out of a position you didn't logic yourself into", and all that. Averaging multiple estimates from different people/surveys *sounds* mathematical, but it's really not. What does assigning a weight of 50% to MIRI's pdoom *actually* mean? Ideally, it means that if you actually review their analysis and conclusions, you quibble with enough details and probabilities that you end up thinking their conclusion is 50% wrong. But it doesn't seem like you did that; it seems like you threw some numbers together and got back 30.
I know that the pdoom I gave is highly flawed. It is also very much not thought out. Creating a pdoom based almost entirely on the opinions of others is a damn good way to get stuck, because you can’t logic your way out of it very well. I am aware of this. To be honest, I’m not sure the method you are talking about is all that much better. This is largely because I suspect that the “logical” methods people go through to create pdoom estimates use a lot of Bayesian estimates. The fact that the estimates range so widely seems to imply that there are multiple points in the argument which are not simple and are highly controversial, and therefore subject to a good amount of feeling-based estimates. On a related note, can you tell me your pdoom so I can update my priors about you? Jk, or maybe not…