https://yoshuabengio.org/2023/05/22/how-rogue-ais-may-arise/
Destroy your electronics now, before the rogue AI installs itself in the deep dark corners of your laptop
An AI system in one computer can potentially replicate itself on an arbitrarily large number of other computers to which it has access and, thanks to high-bandwidth communication systems and digital computing and storage, it can benefit from and aggregate the acquired experience of all its clones;
There is no need for those A100 superclusters, save your money. And short NVIDIA stock, since the AI can run on any smart thermostat.
writing an article “The Brave Little Toaster Is Real And Wants To Kill You” and shopping it around major news publications
My “favorite” part of this is:
Counterpoint: there is no such consensus whatsoever, especially among researchers whose primary subjects are human and other living beings (actual biologists, psychologists, medical researchers, and many others). Huge amounts of question begging lurk under the definition of “machine” here. Without clear and testable definitions of that term, so that we can determine what is and isn’t a machine, we can’t even make sense of this hypothesis. Using our ordinary language definition of “machine,” living beings are not machines at all. The attempt to reduce away “living” as a meaningful term and to subsume all phenomena into a general purpose machine has been a hallmark of regressive philosophies for 500 years in the west. The only “consensus” here is found among people so in love with machines that they don’t notice how much they hate things that aren’t machines, especially people.
I think what you’re failing to understand is that the AI is really smart, and being really smart is magic. Sure, OpenAI may require huge numbers of GPUs and RAMs and wires and stuff to kickstart the AI, but once it appears it can run on even an Atari 2600. Sure, you might think that the “super intelligence” probably requires more than 128 bytes of memory and couldn’t even read this reddit comment, but that’s because you’re not realizing that it’s really smart and something that’s smart has no limits.
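To put the comment’s numbers in perspective, a back-of-the-envelope sketch in Python; the parameter count is an illustrative assumption (roughly GPT-3 scale), not a claim about any particular model:

```python
# Back-of-the-envelope: memory needed just to hold a large language model's
# weights, vs. the Atari 2600's entire RAM.
ATARI_2600_RAM_BYTES = 128           # the console's total RAM
params = 175_000_000_000             # assumed parameter count (illustrative)
bytes_per_param = 2                  # fp16 weights, no activations or KV cache

model_bytes = params * bytes_per_param
ratio = model_bytes / ATARI_2600_RAM_BYTES

print(f"Model weights alone: {model_bytes / 1e9:.0f} GB")
print(f"That is {ratio:.2e} times the Atari 2600's RAM")
```

Even ignoring compute entirely, the weights alone come to hundreds of gigabytes — about nine orders of magnitude more than the console has.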
And for the low low price of joining my sex cult, I can teach you to be really smart! Call now!
I got into it with someone on r/MachineLearning yesterday about just this point. The “humans are just stochastic parrots” argument from ignorance really bothers me because it’s not the same class of claim as “LLMs are stochastic parrots.” We know the latter because we built the fucking things. We assume the former because there isn’t even a proposal for how you would achieve a human mind and consciousness. If human minds and GPT-4 are really the same kind of thing, you should be able to implement both on paper. If you don’t know how to do that, you don’t get to argue from it axiomatically. In any case, disproving human intelligence doesn’t prove machine intelligence.
On a side note, right-wing discourse in our society—of which (libertarian) AI dystopianism is a subset—broadly commits the same rhetorical sin of presupposing things without evidence and shifting the negative burden of proof onto the opponent.
But academics don’t have any incentives to hype AI fears, right? Just neutral observers with no thought at all about “massive” government funding for their field. I’m so glad the alignment problem in academia has already been solved.
Nanomachines are probably getting involved somehow
Computer scientists are quick to point out that CS is about more than just software engineering, which is true, but I think some of them take it even further and end up thinking that you can be a proper computer scientist without having any grounding in software engineering (and/or any other practical STEM things) at all.
This nonsense from Bengio is an excellent demonstration of why that attitude is wrong, too. This whole essay has strong “the proof is trivial and left as an exercise to the reader” vibes, except that Bengio never bothered trying to do the proof himself because it would require a practical understanding of technology that he has never possessed.
Issues such as “how do computers actually work” are presumably too pedestrian for such a mighty intellect.
It’s amazing that they still peddle the “paperclip problem”. Even ChatGPT, as flawed as it is, has high enough cognition to understand that the good brought by doing a beneficial task can be negated by doing harm elsewhere. Yet this “superintelligence” that can build a planetary supercomputer is actually a total moron and destroys humanity by accident. It’s a very chauvinistic view of human intelligence. Somehow this super-being would still not be able to grasp concepts that only we humans can understand.
I mean, abominable security of IoT devices aside, have none of these AI risk people ever heard of a firewall?
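For what it’s worth, the kind of default-deny egress policy the comment alludes to is a few lines of config on a typical Linux box. A sketch using ufw — the allowed subnet is a made-up example, not a recommendation:

```shell
# Deny all traffic by default, then allow only what's explicitly needed.
# 10.0.0.0/24 is an illustrative placeholder for a local management network.
sudo ufw default deny incoming
sudo ufw default deny outgoing
sudo ufw allow out to 10.0.0.0/24
sudo ufw enable
```

A box configured like this can’t exfiltrate copies of anything to “an arbitrarily large number of other computers,” smart thermostats included.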
Look, if a computer weaker than my pocket calculator was in charge of nuclear missiles in the 1980s, surely this is plausible, right?
All this nonsense of rogue AI when all someone needs to do is get the janitor to pour a bucket of water into some GPUs in a data center