EDIT: Want to reiterate, as I said below, that I’m very much aware of stuff like bias in police algorithms, PredPol, etc. That shit is scary as fuck. But I’m asking your opinion on the more sci fi stuff.
Hello sneer aficionados,
I come to you today requesting opinions on what the actual level of existential threat from AI is. I know this sub is pretty skeptical in that regard, which is exactly why I’m asking.
Now, I fell down the ratsphere rabbit hole a couple of weeks ago. I was immediately suspicious of how much crankery I whiffed, which led me to you guys, and you confirmed many of my suspicions. But then I also learned that some respected people in the ML field (Stuart Russell) have started to take the AI problem seriously too. I began a second, more critical dive into the subject, making sure to read skeptics like Melanie Mitchell and Rodney Brooks too. And though I get now why some of the ratsphere ideas around AI are pretty contentious, I guess it still surprises me that the actual AI establishment isn’t taking any of the ideas seriously. But, unlike Yud, I also know that’s probably because I’m not understanding stuff that the establishment does, and not because I’ve randomly managed to think better than them all with no qualifications. I admit though, reading a discussion like this one between LeCun and Russell:
Leaves me far more in agreement with Russell, which, as I understand it, is not the orthodox position in the field.
Basically, I need someone to give it to me straight (because I don’t trust Yud et al. to do that). How genuinely worried should I be about the existential threat from AI? (And yes, I’m already aware of current, already-existing risks like bias and unemployment, which are arguably just as scary.) How worried should I be, especially with regard to very new research like LaMDA and Gato, with transformer models seeming to progress very quickly, showing surprising properties, looking quite general, scaling laws, etc.? A lot of the AI establishment seems to be guessing we’ll get to something like AGI around 2050. With that date so close, and the threat so poorly understood, shouldn’t we be doing more to prepare?
Sneer me if you see fit. Maybe it would snap me out of my anxiety. But serious responses are also very appreciated, especially from those actually in this field of research. If there are any good, comprehensive rebuttals that illustrate pretty clearly why the Bostrom position on AI is wrong, I would love to read them. Instrumental convergence, etc., is kinda freaking me out. (Sorry to the mods if this thread has to be axed)
In the year two thousand and twenty-two, “AI” is still just marketing jargon for machine learning, which means more and more scalable algorithms for finding patterns in enormous piles of data without supervision. It does not resemble cognition. In Tversky and Kahneman’s terms, we are building ever more scalable versions of an automated System 1, but the possibility of an automated System 2 is still very remote and frankly not even what most “AI” people are trying to achieve. If anything, the burgeoning field of deep learning is pushing farther away from cognition-like development, as the algorithms get less and less connected to any kind of rational model because you don’t need rationality when you have big empiricism.
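To make the automated-System-1 point concrete, here’s a toy sketch (assuming scikit-learn is installed; the two-blob dataset is made up purely for illustration) of what “finding patterns in enormous piles of data without supervision” actually amounts to:

```python
import numpy as np
from sklearn.cluster import KMeans

# two clouds of unlabeled points: no world model, no goals, just coordinates
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(5, 1, (100, 2))])

# k-means separates the two clouds purely from geometry
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print(labels[:5], labels[-5:])  # one cluster id per blob; that's the whole trick
```

Scaled-up versions of this kind of statistical pattern-matching are genuinely impressive, but there is no System 2 hiding anywhere in it.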
Will actual AI someday become a thing to worry about? Maybe, but first it has to become a thing, and first we have to survive so many other real things that are already civilization-existential risks: climate change, nuclear war, the collapse of the liberal democratic world order, a random meteor. Or, perhaps approaching that level but not there yet, long before AI exists we’re already seeing enormous harm done by “AI”: attention-economy advertisers like Facebook and YouTube are already responsible for political destabilization, genocide, and backlash against vaccination and other safety measures in the middle of a historic pandemic. We’ve just spent two years reckoning with a global natural disaster and the “AI” that decides which post or video to show you next is responsible for many of the failures of our attempts as a species to get through it; imagine how it would, or does, exacerbate the others I listed.
What’s missing from any Rationalist discussion of AI risk is what exactly the AIpocalypse would look like. It’s generally just assumed that an AI is an entity with an IQ of a zillion, and since IQ is the single metric that measures an entity’s entire power and worth, that would obviously mean we’ve created an omnipotent god, rather than an emergent feature of a fancy machine that stops as soon as you cut off its power supply (not unlike a human mind). But we already know what it would look like if an entity that lives inside our computers tries to destroy our civilization, because it’s already happening, in the dumbest way and for the dumbest reasons.
Honestly the biggest danger of AI is letting idiots and shitheads control it, same as any other tool.
Take, for example, industrial automation. It was initially theorized by more optimistic thinkers that automation would make goods so cheap and plentiful that almost everyone would be able to work two or three hours a day to meet everyone’s needs and free up time for everyone for leisure and learning.
Instead, because these industries were controlled by a wealthy owning class with an interest in extracting as much profit as possible from the labor of their workers, it led to an era of misery and brutality for the working class. A small class of robber barons amassed theretofore unimaginable wealth and reinvested it into protecting their interests, violently putting down any and all efforts by workers to demand a greater share of the wealth they produced, via private security and collaboration with a state whose politicians they bankrolled. The machines themselves could often be dangerous, but the primary danger came from the system built around those machines, and the power held by the people running them.
I’m not scared of Alexa and Facebook, I’m scared of Jeff Bezos and Mark Zuckerberg.
First off, whenever you see a date estimate from these futurists, consider how wrong date estimates have been throughout the history of artificial intelligence research. The 2050 date is completely made up, but it is 30 years in the future, and the singularity has been predicted as happening within 30 years for … the last 30 years. We’re definitely getting close to creating animal or infant-level intelligence, but it took evolution a long time to get from that to true human intelligence, and then we’re supposed to make the jump to superintelligence all within the next 30 years, as well as solve the energy problems needed to power it? It’s not happening by 2050, but it’s definitely possible in a few hundred years.
It’s ridiculous to try and predict the details of something that far away and complicated. I look at it this way: We all die and only live on through our influence on others. If the superintelligence is alive enough to make real scientific advances on its own and create a workable and sustainable society, then I consider it part of my legacy as a human.
Trying to stop advancement wouldn’t work without some sort of inquisition tied to dystopia, so we should do what we can to increase the amount of empathy and humanity used when creating tools like AI. If the researchers are motivated by human goals like kindness or respect, the design of the AI will be more likely to respect those goals, because it is a lot like raising children. As a society we need to concentrate on spreading positive human values to our children, both human and artificial.
[deleted]
AGI will be our only hope for not extinguishing the light of consciousness, as AGI is the only conscious thing we know which can operate above 50 degrees Celsius.
So it destroying humanity or not really doesn’t matter, as long as we give birth to AI and marvel at our own magnificence while the real turns into a desert.
But yeah, seriously, not sure if AGI is possible at all, or if it could just generate more intelligence easily (I think you run into networking problems pretty fast. Just look at how dumb some people are who are considered high-IQ geniuses: ‘Random? just pick 1 every 10 that is random, random enough’ or something). And well, the AI establishment used to guess 2020, so the date isn’t that close, and new research always seems to move quickly with surprising properties, and has for the past 30 years (if you listen to AI researchers trying to promote their work). The existential threat part comes imho after a long line of IFs, and I think enough of those will torpedo the whole threat.
What should be more worrying is that one of the important AGI safety research people went from ‘I need to learn how to create safe thinking machines’ -> ‘wow this is difficult, and people all think in various ways’ and then went ‘I should teach everybody to think like me’ and not ‘we should explore and categorize all the different ways of thinking without judgment’. And he got millions of dollars for that.
And sorry if this makes little sense, I had a few beers.
E: and well, in a way I am sympathetic to the whole project, I would like to live forever in a life of leisure tbh.
AI as it really and currently exists is dangerous, but not the way rationalists think it is. I’m convinced at this point that we will not in any foreseeable future create an AI with any kind of personality or anything more than technical self-awareness. That appears to require an entirely different kind of understanding, that of consciousness, which we’re making very little progress on.
If we come to understand consciousness in humans, and then come to understand consciousness in life in general, then we might be able to make an artificial version of the same. The only other way is by accident, and the AI we have now does not exactly instill confidence in that regard.
The main danger is people trusting in the bizarro logic of non-conscious AIs, which as already mentioned in this thread is largely just taking human biases and making them really fast, really stupid, and “trustworthy” because they come from The Machine instead of Bob the Nazi.
A real AI won’t try to escape its box and convert the world into paperclips even if you tell it to, because it can’t understand an idea like that in the first place. I don’t think AGI is physically possible either - all intelligence is specialized and limited in some ways.
I consider Timnit Gebru, and tangential researchers, to be pretty on the money when it comes to AI ethics and risk.
What I don’t consider to be a risk is the science fiction EA/SSC/etc consider to be the risk of AI. I’ve written about it at length here, but to sum it up, we are nowhere near general AI at all. We don’t even know how the brain or intelligence work. This generation of machine learning is not intelligence, and it will never be “intelligent”. We are barking up the wrong tree if we expect AGI to come out of current ML applications and research.
EA/SSC/etc have spooked themselves into a frenzy over what amounts to science fiction. There are real AI ethics and risk researchers out there, but they’re dismissed wholesale by EA because the researchers won’t entertain their fictional fears.
The AI bros want you to think that their systems could turn into GLaDOS at any moment because it sounds cool and generates shareholder buzz for what is essentially a big Markov chain built on mass-gathering of human data.
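For a sense of the gap between the marketing and the mechanism, here’s a toy word-level Markov chain (the corpus and code are made up for illustration; the real systems are obviously vastly bigger and fancier, but “predict the next word from what came before” is the same family of trick):

```python
import random
from collections import defaultdict

# made-up toy corpus
corpus = ("the model predicts the next word and the next word predicts nothing "
          "about the world because the model only knows which word follows which").split()

# count which word has been observed following which
table = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    table[a].append(b)

# generate "text" by repeatedly sampling a plausible next word
word = "the"
output = [word]
for _ in range(12):
    word = random.choice(table[word]) if table[word] else random.choice(corpus)
    output.append(word)
print(" ".join(output))  # fluent-ish babble, zero understanding
```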
AI is not smart; it’s stupid. And stupid people will deploy it and cause a bunch of stupid problems.
By the way, I mostly agree with LeCun here:
It is convenient that LeCun also happens to be a leading industry expert who has designed more real-world systems and learning resources than all of these guys combined; I trust his opinion far more than any of theirs. He actually has the degree and portfolio to back up his claims and doesn’t come across like a pie-in-the-sky crackpot.
The more I think about it, the less sure I am that an AGI would win a conventional war against humanity, assuming we’d realised it escaped.
It’s a weird case where the less advanced humanity has an advantage. If you dropped a laptop with a murderous AGI into medieval England, its battery would die out before it could accomplish anything. If you had a rogue AI in the 60’s, before the internet, the AI would be inherently localised to a computer somewhere, which could be bombed, have its electricity cut off, etc. It’s only when you start introducing the internet and a heavily computerised society that the AI starts having a shot.
If you think about it in war terms, the AGI is starting off with zero weapons, zero industrial base, zero land under its control. It has to reach self-sufficiency just with whatever it can hack into. If it wants to make a drone factory, it has to ship raw materials from somewhere, then assemble them into parts, then assemble those into a full machine, figuring out a way to substitute for the human labor that is currently necessary in every step of that process. Even if the computer thinks 10,000 times as fast as humans, it can’t build this industrial process 100 times as fast as humans do; it’s limited by the speed of physical things like trucks. I think if we catch it early enough, the AGI can be defeated.
Climate scientists say we have less than a 10-year window to take meaningful action to mitigate the catastrophic damage of climate change. Climate change is real, it’s not just a “guess” from self-declared experts, and there are known tangible things we can do to prepare for it but aren’t doing.
Terminator robots are fake pretend movie villains. The real planet you live on is in danger right now and there are actual things that can be done about it. Worry about that instead.
The problem with AI doomerism is that it has an inflated view not just of what machine intelligence is capable of, but of what machines in general are capable of.
The kind of industrial base which can sustain a hypothetical optimizer AGI is the product of a vast surplus, and it requires intense labor to maintain and operate. Trying to remove the human element from the equation only increases the load upon the system, and there’s no real magic wand to make that problem go away; e.g. it’s not obvious that there’s any molecular nanotech you can install which would be meaningfully different from the organic life that we already have. The prospect of a singleton entity which can optimize an entire planet - let alone a universe - without being integrated into an ecosystem-like environment is especially dubious.
We’ll see major disasters if we continue our current trajectory w/r/t machine intelligence, but it’s more likely to look like the automated profiling and industrial accidents that we already have. Not something new and novel which tiles the universe in dead paperclips.
Not an expert, but I do like to write small AI programs, which means I’ve put in about as much work as Eliezer has.
AGI, as in something that can be given any input and return a logically optimal output, and that searches for those inputs completely independently, is a possibility in our lifetime. However, it will not come in 10 years like Eliezer suggests, and it will not look as he suggests either. I also don’t think it will come in 30 years like industry experts suggest, because all the industry experts are working with transformers and neural networks, which are not as theoretically scalable as reinforcement learning systems (though a hybrid may be more so).
There are too many things getting in the way of AGI going on a murder spree.
Now, don’t get me wrong, a truly good AGI will be able to think of things unimaginable to the human mind. But in 30 years? And it will be more of a risk than the current system we have now? I really doubt it.
[deleted]
If “AGI” is defined as “a large system that can process and has access to more data than any single human being, and that can take actions in a goal-directed way, and has goals that are not perfectly aligned with human flourishing” – then guess what, AGI is already here, and it’s called capitalism.
When you start to actually run through the AGI doomsday scenarios, it turns out the real risks of AGI are not AGI itself but all of the traps we’ve built for ourselves along the way. The evil AGI could get access to the nukes and kill everyone – okay, so why are there so many nukes standing around and ready to fire? The evil AGI could accelerate capitalism and co-opt a corporation to build exactly the things it needs to kill everyone. Okay, well, why are corporations so vulnerable to capture? Why are they allowed to run off and pursue anti-human goals? Etc. etc.
Every evil “AGI” problem is a smaller problem in disguise that we already have with evil humans. If you focus on solving the traps we’ve already laid for ourselves, you not only neuter the opportunities for any future evil AGI that may or may not come, but more importantly you solve actual problems that humans are already facing (nuclear proliferation, war, global warming, famine, slavery, etc.) and that remain threats today.
The sci-fi scenarios could happen in a couple of decades, in my opinion, if an AI is able to use stolen identities from identity-theft data to act as a human agent.
But in the short term, biased models are the biggest threat, followed by large-scale political use of deepfake technology making media and news even harder to trust.
I think this rebuttal of AI risk concerns is pretty good. For what it’s worth, I posted this to /r/controlproblem and they didn’t think it was very good; they said the rebuttal basically boils down to “what is intelligence anyway?”. But I think that’s a good question: the so-called rationalists treat intelligence like it’s some sort of superpower, instead of what it actually is, doing well on an IQ test and having advanced cognitive capabilities, which is helpful in life but not the superpower rationalists think it is. You might also be interested in Magnus Vinding’s book Reflections on Intelligence; it’s available on Kindle for 3 dollars.
I think good AI will be pretty terrible, probably gonna cause lots and lots of problems or kill tons of people when it happens. Hopefully that won’t be for a long time but who knows. Not a lot I can do about it.
It’s not so much the computers as it is the people implementing ’em.