I have only been looking into AI for a short time. It is very distressing to see so many people who at least appear intelligent claiming that there is a damn good chance that we are all going to die. If you asked me right now how likely I think it is that AGI will cause human extinction, I would probably say around 30%, and that’s just extinction. Not to mention, for example, the possibility of a terrible authoritarian dictatorship that lasts forever.
There are also a lot of somewhat strange things about the AI alignment/EA/longtermism community. The links with controversial racist scientific ideas, for example. There’s also the fact that it just seems like a cult on so many levels. With that said, I’m inclined to believe that we are actually in extreme danger. The number of people who are worried about this is not small. In a survey conducted last year, 48% of machine learning researchers assigned a 10% or greater chance to an “extremely bad” outcome. It seems to me that for this not to be a real issue, there would have to be some kind of mass delusion among people who arguably have a very strong incentive not to believe it.
I have no credentials; I can’t form my own opinion on the plausibility of FOOM, for example. I’m really not sure what to think. This seems to be the biggest place that is largely critical of rationalism, which of course has strong ties to the whole AI safety community, so I’m hoping I may get a different perspective here.
Should I be as worried as I am?
This topic has probably already been discussed; you can search the subreddit a bit.
That said, the rationalist community certainly believes in an “extinction risk” from a rogue AGI. But this is based on a stack of claims backed by no evidence. And as you said, it has some cult characteristics.
Aside from this kinda ridiculous community, though, there sure are a lot of intelligent people who have a very bleak view of the future: climate change wreaking havoc everywhere, politics sliding toward fascism in a lot of countries, capitalism concentrating all the wealth while people have more and more trouble earning a decent living.
Inner peace can be found even while holding a belief in doom like one of those. Basically, our beliefs can be wrong. Things can turn out less bleak in reality; we can’t predict what will happen, at least not in detail as individuals. And as individuals we don’t bear much of the responsibility anyway.
If you feel distressed about this, you might also have other mental vulnerabilities, so it might be worth seeing a counselor.
Cheers
EDIT: links in the community about AGI https://www.reddit.com/r/SneerClub/comments/yqa5nm/resources_for_arguments_against_the_bostrom_lw/ https://www.reddit.com/r/SneerClub/comments/10buvl1/some_rationalists_experience_a_small_epiphany/
The level of computer literacy exhibited by the Rationalist leaders (recent examples 1 2) should be enough by itself to tell you not to take their opinions about AI seriously, no?
More seriously, I think the “longtermist” worldview is basically a reactionary stalking horse. Worrying about a hypothetical AI destroying the world in an unspecified fashion to make paperclips is an awfully convenient reason not to worry about the actual AI safety issues which really exist right now: policing and justice (e.g., racial biases being consolidated by predictive models), online targeting of vulnerable individuals and dissenters, misinformation facilitated by text and image generators…
Climate change, racial justice, pandemics, war, etc, don’t seem so important if we live in a simulation, or compared to the problems of a trillion intergalactic humans a thousand years from now. No wonder Peter Thiel is into this shit.
There are lots of things to worry about in the world but getting eaten by a robot basilisk is not one of them.
I’m a machine learning engineer finishing up my master’s thesis. In short, AGI is incredibly far away, given how parametrically bloated today’s models are and how much they lean on probabilistic shortcuts. While models like GPT-3.5, YOLOv5, and BERT seem impressive, the more you prod them, the clearer it becomes how far from AGI we are. I wouldn’t worry.
Some weird comfort for you: I’m too busy worrying about the very real overlapping crises we’re already facing to worry about AGI, and the fact that a lot of these AGI-terrified people dismiss the threat of climate change (among other examples) only reinforces my decision not to take them seriously. Like sure, maybe human extinction would be sad, but I’m a lot more worried that my region already has more tornadoes than snow. If you’re struggling with the broader idea of mass human suffering and potential extinction, the growing field of climate psychology might have useful things for you.
Also, we’re all going to die. I think that fact is at the heart of a lot of the more extreme rationalist/AGI crowd’s perseverating.
That’s OK, neither do they.
I’ve been making my own list of reasons why AI risk is overrated; here is a dump of points:
[deleted]
So I am, I guess, rat or rat-adjacent or whatever you want to call it. I sometimes go to LW meetups IRL because I enjoy the company, and sometimes I read ratfic, though I haven’t actually read that much. I’ve never read the Sequences or hung out on the LW forums. In my experience, the actual coders I meet monthly at LW meetups generally don’t fear this AI catastrophe. Some do, but the majority don’t. Over half the people at any given meetup tend to be coders, but when the topic comes up it’s generally a 5-to-1 split in terms of who believes. I think you’d have to describe people who attend LW meetups as more sympathetic than most toward AI risk, and even so the people I know think it’s unlikely.
I also spent about a month asking rats online about it. They either didn’t believe it or could provide no reasoning, just frothing or straight-up insulting me. I really don’t think there is much reason to believe in the AI risk.
For me, humor is an excellent tool to reduce existential dread and increase my capacity for interacting with the world and contributing to causes like anticarceralism, racial justice, climate resilience, etc. that actually will have a positive impact on people. There’s been a hilarious story circulating lately (from Paul Scharre’s book Four Battlegrounds) about a military research trial of an AI tool for (basically) watching a perimeter for approaching pedestrians, like a robot watchman. Spooky scary robot overlord stuff, right? Except after training this thing on hours of Marines walking around, they issued the Marines a challenge: if you approach the tool from 300 yards away without being identified as intruders, you win.
Two of them somersaulted for 300 yards and never got identified. Another pair hid under a cardboard box and made it, giggling all the way. That’s the thing about AI: it’s basically just a synthesis of old data, and it sucks at coming up with novel solutions to problems. “Coming up with novel solutions” is basically what we humans are optimized for as a species.
So next time the robot apocalypse brain weasels get going, remember: the cardboard box works.
https://www.pewresearch.org/fact-tank/2022/12/08/about-four-in-ten-u-s-adults-believe-humanity-is-living-in-the-end-times/
40% of Americans believe that we are living in the end times so if anything your machine learning colleagues are underestimating the risks.
Take from that what you will.
I know you work in the field of machine learning, and possibly you would like to believe that, because it is a cutting-edge branch of computer science, those involved are more intelligent or more in tune with the modern world and its dangers going forward, or something along those lines. But I don’t think they’re any better situated to make accurate predictions than similarly well-educated members of society.
I wonder what proportion of nuclear technicians believe that their technology will lead to the end of human society.
Hell, I’d be far more concerned by the near constant alarm coming from climate scientists concerning an actual, near-term existential threat to humanity.
One of the big giveaways that Rats/‘Effective’ Altruists/etc are either very silly or full of shit or both is that we are already facing a bunch of serious risks from the AI and machine learning that does exist, but they are simply not interested in them. There’s already so much work being done on issues like how biased data and researchers have produced things like racist policing/security monitoring systems or sexism in job-application filters, etc.
But these real problems all point toward social and political solutions, not stuff you can just technologically solve or fix by posting incessantly online, and so Rats et al have no interest in them. Also, a not-insignificant chunk of their community, particularly in the upper echelons, actively likes and/or benefits from these problems. Their issues with ‘scientific’ racism, misogyny, dubious applications of consent, etc are all starting to bubble into public view now, but they have been there since the start.
Basically, if you’re worried about the basilisk at the end of the singularity, you should focus on making a society that wouldn’t want to build it in the first place.
Why are you at 30% if even the pessimistic experts in that survey are at “10% or greater”? (The median was 5%.) Also, you should note that while putting numbers on predictions is a useful shortcut for expressing gut feelings, ultimately those numbers are just pulled out of asses; they really are nothing more than gut feelings.
So, not “mass delusion”, just gut feelings based on who knows what? Science fiction? Yudkowsky? Leftover religious indoctrination? Some kind of innate fear of the unknown?
What’s your estimate for the likelihood of nuclear apocalypse? Higher? Lower? Think of a number before you go on… Here’s an article about those “estimates”: https://www.brookings.edu/blog/order-from-chaos/2022/10/19/how-not-to-estimate-the-likelihood-of-nuclear-wa
I’m really sorry you’re in a state of distress over this :( I doubt there’s anything I could write to assuage your concerns, or really any set of words that could magically make you less worried, but I think we could both agree that just feeling scared and anxious over it, even if it were definitely real and coming soon, wouldn’t do anything constructive to make things better. I really think you need to give yourself permission to think about other things and make a habit of directing your attention toward other parts of life that make you happier, at least for a while. If you feel more stable and happy with things in, say, 3 to 5 months, then you can revisit whether anything constructive could come out of looking into this topic.
I’m not worried: We have extremely dangerous technologies now, and we’ve found ways to limit their risks. Developing an AGI is going to take a lot of time and work, and during the development process we’ll find ways to limit the risks of that technology.
Climate change, and more broadly environmental destruction, will do us in long before autonomous AI becomes a threat.
(I say “autonomous AI” because “AIs” leveraged by evil people to be evil on a larger scale already exist and are very concerning. A face detection system can be “dumb”, as far away from an AGI as possible, and still be a very real threat to freedom.)
Here, lemme come at this from a rationalist/EA angle.
So, the common term for this is pdoom: p(robability of) doom(sday). You’ve claimed a pdoom of about 30%. The issue with this, from my point of view, is that it’s really tough to discuss something as large and complicated as “the extinction of humanity” or to intuitively assign probabilities to it.
Instead, how about we break it down a bit? It’s hard to discuss your answer to a math problem when you don’t show your work!
First, lay out all the steps in a doomsday scenario. Here’s a common example:
AGI is developed by ~2030
AGI decides to end humanity
AGI is given access to sending/receiving arbitrary requests online
AGI uses its permissions to hack into nuke silos
AGI launches nukes
AGI prevents humans from stopping/cancelling launch
After you’ve done that step, assign probabilities to the (hopefully simpler) pieces.
Then, when you’ve got all the pieces, multiply them together and you should have a pdoom that you’ll feel more strongly about.
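To make that concrete, here’s a minimal sketch in Python of what “showing your work” might look like. The step names mirror the example scenario above, and every probability in it is a made-up placeholder, not anyone’s actual estimate:

```python
# Hypothetical doomsday scenario broken into steps, each with a
# made-up placeholder probability. Multiplying them gives the chance
# that *every* step happens, under the simplifying assumption that
# the steps are independent.
steps = {
    "AGI is developed by ~2030": 0.20,
    "AGI decides to end humanity": 0.10,
    "AGI gets arbitrary internet access": 0.50,
    "AGI hacks into nuke silos": 0.05,
    "AGI launches nukes": 0.50,
    "AGI prevents humans from stopping the launch": 0.30,
}

pdoom = 1.0
for step, p in steps.items():
    pdoom *= p
    print(f"{step}: {p:.2f} (running product: {pdoom:.6f})")

print(f"\np(doom) for this scenario: {pdoom:.6f}")  # ~0.000075 with these numbers
```

Notice how even fairly generous-looking numbers for the individual steps shrink fast once multiplied, and how each probability you disagree with becomes something concrete to argue about.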
Once you’ve shown your work, we can get into the nitty-gritty and start analyzing your doomsday scenario, or the probabilities you’ve assigned. Until then, everyone here is basically throwing spaghetti at the wall and hoping they hit something close enough to your actual beliefs to make it stick.