Did you ever think you’d have to explain to people that AI
isn’t about to take over the world? That must be odd.
It’s certainly a new concern. For so many years, AI has been a
disappointment. As researchers we fight to make the machines slightly
more intelligent, but they are still so stupid. I used to think we
shouldn’t call the field artificial intelligence but artificial
stupidity. Really, our machines are dumb, and we’re just trying to make
them less dumb.
Now, because of these advances that people can see with demos, we can
say, “Oh, gosh, it can actually say things in English, it can
understand the contents of an image.” Well, now we connect these things
with all the science fiction we’ve seen and it’s like, “Oh, I’m
afraid!”
Okay, but surely it’s still important to think now about the
eventual consequences of AI.
Absolutely. We ought to be talking about these things. The thing I’m
more worried about, in a foreseeable future, is not computers taking
over the world. I’m more worried about misuse of AI. Things like bad
military uses, manipulating people through really smart advertising;
also, the social impact, like many people losing their jobs. Society
needs to get together and come up with a collective response, and not
leave it to the law of the jungle to sort things out.
That's the heart of the problem with the technophiles: they want a "law of the jungle" because it warms their edgy sociopathic hearts, yet they don't realize they'd be among its earliest prey.
I can hardly sneer at what is a reasonable and measured take on AI and its broader impact on society.