r/SneerClub archives

Considering the utter failure of projects like SI/MIRI to produce any results outside Harry Potter fanfic, is any of the stuff they propose even feasible at a nuts-and-bolts AI level? Here are Yud’s views on symbolic and connectionist AI for a start:

https://www.lesswrong.com/posts/juomoqiNzeAuq4JMm/logical-or-connectionist-ai

I don’t know- AI is a very fast-moving and ill-defined field, and I think even an expert who tries to predict where it’s going has a larger-than-usual chance of being completely incorrect.

That being said, Big Yud is so clearly bad at talking about AI that I trust him to talk about it about as much as I’d trust the average CS undergrad.

Examples from this piece:

> I would even say that a neural network is more analyzable [than a symbolic system] - since it does more real cognitive labor on board a computer chip where I can actually look at it, rather than relying on inscrutable human operators who type “|- Human(Socrates)” into the keyboard under God knows what circumstances.

We don’t yet even have a strong grounding in why neural networks work, afaik (feel free to correct if I’m off here, not fully up to date on the field). Like, lots of cutting-edge methods have no rationale beyond “lol it performs better on MNIST / CIFAR-10 / ImageNet.” And the complaint that knowledge bases are complicated? Has he ever seen training data? Data are, in general, awful.

If you believe that connectionist AI is a simple, reasonable thing, take a look at some of the examples of neural networks being tricked into making utterly bizarre image-misclassifications. They’re fantastic.

> So I’m just mentioning this little historical note [that going from connectionism to backpropagation took 17 years] about the timescale of mathematical progress, to emphasize that all the people who say “AI is 30 years away so we don’t need to worry about Friendliness theory yet” have moldy jello in their skulls.

He’s talking about how progress in AI can be much slower than expected… in order to argue that we should be worried about imminent general AI? I mean, I know his point is that it would take a long time to develop “Friendliness theory,” but still, he’s undermining his own point.

(Doubly so because this is an essay about how general AI needs to come from a radically new idea we don’t even have yet, so what he proposes to do is develop a theory for an AI system we not only don’t have, but also don’t even understand the basis of? It’s like asking someone who doesn’t know what a stock is to help you write regulations for high-frequency trading.)

> This robot ran on a “neural network” built by detailed study of biology.

lol he’s complaining about people getting sucked into incorrect neural network hype but he’s bought into the “neural networks are totally biology” hype.

(Hot take: we should ban machine learning researchers from using the phrase “neural” until they can explain, at an undergraduate level, how a synapse works. They should also be banned from saying “artificial intelligence” until they have a consistent definition of what “intelligence” is, which, given that nobody does afaik, should take a while.)

Actual success with NNs only really happened when people *stopped* trying to mimic biology: Hebbian learning got replaced with backprop, good networks are almost entirely feedforward or with simple recurrence, and sigmoid/tanh nodes got replaced with ReLU and ELU units. Biology has not really been a good guide for what works in this field (in particular, we have no idea how to leverage whatever secret sauce makes brains so good at unsupervised learning). For fooling networks, also check out [adversarial patches](https://arxiv.org/abs/1712.09665) (small stickers that force misclassification of essentially arbitrary objects) and [adversarial examples in the physical world](https://arxiv.org/abs/1607.02533) (which shows that visibly-identical adversaries are physically robust, capable of fooling a novel system after being printed out and photographed).
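
If anyone wants to see how mechanical the fooling trick is, here's a minimal sketch of the fast-gradient-sign move that the printed-and-photographed paper builds on. `model`, `x`, and `y` are placeholders for whatever pretrained PyTorch classifier and data you have lying around, not anything lifted from those papers:

```python
# Minimal FGSM-style perturbation: nudge each pixel a tiny step *up* the loss
# gradient. Assumes `model` is a pretrained PyTorch classifier over images in [0, 1].
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps=0.01):
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # The sign of the gradient is the cheapest direction in which to hurt the classifier.
    return (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()

# Usage sketch, given a batch `x` with true labels `y`:
# x_adv = fgsm_perturb(model, x, y)
# print((model(x).argmax(1) == y).float().mean(),      # clean accuracy
#       (model(x_adv).argmax(1) == y).float().mean())  # accuracy after a barely visible nudge
```

The unsettling part is how small `eps` typically needs to be (a couple of intensity levels per pixel) before labels start flipping.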
But that might suggest that the major differences between artificial NNs and biological neural networks (like Hebbian learning vs. gradient descent, or synchronized oscillations apparently playing a major role in biological brains but not in most artificial NNs) are important for understanding the major limitations NN models have relative to biological brains (not just human brains, but also less complex animal brains), like the weird image-misclassification problems mentioned by u/CountOneInterrupt above, or the differences discussed [here](http://www.3quarksdaily.com/3quarksdaily/2017/03/artificial-stupidity.html):

> While AI systems have recently achieved spectacular successes on learning complex tasks, the learning that powers them depends crucially on five elements: 1) The availability of large amounts of data; 2) The ability to store this data off-line in memory, and to access it repeatedly for rehearsal; 3) The computational capacity to extract the requisite information from the data; 4) The time to carry out the computationally expensive process of repeatedly going through a lot of data to learn incrementally; and 5) The energy to sustain the whole process.
>
> None of these is available to a real animal, or has been available to humans through most of our species’ history. Nor have they been needed. Ideas that require great effort to understand or tasks that require a lifetime of practice to master are relatively recent developments even in human history, and are probably not a significant part of the experience of other animals outside of laboratory or circus settings. For an intelligent machine to learn chess or Go is remarkable, but says little about real intelligence. It is more useful to ask how a human child can recognize dogs accurately after seeing just one or two examples, or why a human being, evolved to operate at the speed of walking or running, can learn to drive a car through traffic at 70 mph after just a few hours of experience. This general capacity for rapid learning is the real key to intelligence – and is not very well-understood.
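
To make the Hebbian-learning-vs-gradient-descent difference mentioned above concrete, here's a toy single-neuron sketch in numpy; the data, targets, and learning rate are all made up for illustration:

```python
# One linear neuron y = w . x, trained two ways on made-up data:
# a plain Hebbian rule vs. a gradient (delta-rule) update on squared error.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                  # 100 random inputs, 5 features
w_true = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
targets = X @ w_true                           # the mapping we'd like the neuron to learn

w_hebb = rng.normal(scale=0.1, size=5)
w_grad = np.zeros(5)
lr = 0.01
for _ in range(5):                             # a few passes over the data
    for x, t in zip(X, targets):
        w_hebb += lr * (w_hebb @ x) * x        # Hebb: strengthen whatever co-fires; no error signal anywhere
        w_grad += lr * (t - w_grad @ x) * x    # delta rule: gradient descent on (t - w.x)^2 for one neuron

print("Hebbian: ", np.round(w_hebb, 1))        # grows along the inputs' dominant correlations, ignores the target
print("Gradient:", np.round(w_grad, 2))        # heads toward [1, -2, 0, 0.5, 3]
```

The Hebbian weights never see the target at all, which is roughly why the field dropped that rule for supervised tasks once backprop worked.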
> We don't yet even have a strong grounding in why neural networks work, afaik

There are several competing hypotheses, like the [information bottleneck hypothesis](https://arxiv.org/pdf/1703.00810.pdf) that Tishby puts forward, but there's a bunch of [delightfully snarky academic debate](https://openreview.net/pdf?id=ry_WPG-A-) about it.
I love, love, love Soatto's related work, but [I didn't have a lot of success with my attempt to implement it.](https://github.com/coventry/InformationDropout)
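
For anyone who wants the one-line version of what the bottleneck crowd is claiming: each layer's representation T is supposed to compress the input while keeping whatever predicts the label, roughly the standard IB objective below (my paraphrase, not a formula lifted from either paper):

```latex
% Information bottleneck Lagrangian: pick a stochastic encoder p(t|x) whose
% representation T discards input detail, I(X;T), while retaining label
% information, I(T;Y); beta sets the compression/prediction trade-off.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)
```

Iirc, a big part of the snark in the rebuttal is that the "compression phase" Tishby reports shows up with saturating tanh units and mostly disappears with ReLU nets, which is awkward for a story about why deep learning works in general.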

I don’t know anything about SI/MIRI’s proposals or results. I’ve tended to follow Deep Learning papers for the past couple of years, and I haven’t recognized any work from them, if I’ve seen it. Doesn’t mean much, though… It’s a huge field.

I agree with the overall strategy he seems to be outlining in the OP link, of mixing logical/Bayesian systems with connectionist ones. There are interesting contemporary architectures that do something like that: use the perceptual and decision-making capabilities of neural nets to drive some kind of symbolic or probabilistic system. I think it’s the way forward.
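
For concreteness, here's a toy version of that pipeline shape (names, rules, and numbers all invented for illustration, not any specific published architecture): a stand-in for the neural front end emits weighted ground facts, and a trivial rule layer does the symbolic part on top.

```python
# Toy neuro-symbolic pipeline: a pretend neural detector produces weighted ground
# facts, and a tiny rule layer does forward-chaining over them. Purely illustrative.

def detect(image) -> dict[str, float]:
    # Stand-in for a CNN's softmax/sigmoid outputs; hard-coded here.
    return {"cat(x)": 0.92, "on(x, sofa)": 0.81, "dog(x)": 0.03}

# Horn-style rules over predicted facts: (premises, conclusion).
RULES = [
    (("cat(x)", "on(x, sofa)"), "shoo(x)"),
    (("dog(x)",), "pet(x)"),
]

def infer(facts: dict[str, float], threshold: float = 0.5) -> set[str]:
    derived = set()
    for premises, conclusion in RULES:
        # Crude fuzzy AND: a rule fires if its weakest premise clears the threshold.
        if min(facts.get(p, 0.0) for p in premises) >= threshold:
            derived.add(conclusion)
    return derived

print(infer(detect(image=None)))  # -> {'shoo(x)'}
```

The contemporary versions replace the hard threshold with something differentiable or probabilistic, but the division of labour (perception in the net, structure in the symbolic layer) is the same shape.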

As I understand it, the symbolic and connectionist approaches weren't seen as mutually exclusive until the split between "east coast" and "west coast" cognitive science.
Personally, I'm a big fan of "flyover" cognitive science.

You seem to be sceptical about AI-safety-research-by-amateurs rather than AI.