r/SneerClub archives
Yudkowsky classic: “A Bayesian superintelligence, hooked up to a webcam of a falling apple, would invent general relativity by the third frame” (https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message)
62

So this is a typical rationalfic short by yudkowsky trying to convince people of the AI threat, but contained within is the most batshit paragraph I’ve seen in all of his writing:

Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

I invite you to actually look at a video of apples falling on grass. I’m not sure you could even deduce Newtonian gravity from such a video. Remember, the hypothesis of Newtonian gravity is that objects attract each other in proportion to their masses. The gravitational force between two 1 kg apples 10 cm apart is on the order of a nanonewton, whereas the force of a 5 km/h wind on a 10 cm diameter apple is on the order of a millinewton, about six orders of magnitude higher, to the point where minor variations in wind force would overwhelm any apple-to-apple gravitational effect. The only aspect of gravity that can be seen in the video is that things fall down and accelerate, but there is literally no evidence that this process is affected by mass at all. Hell, mass can only be “seen” insofar as it imperfectly correlates with size. It’s even worse with the grass example: the blades are literally held up against gravity by nanoscale bioarchitecture such as vacuoles. Is the computer going to deduce these from first principles?
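If you want to check the arithmetic, here is a rough back-of-the-envelope sketch; the drag coefficient, air density, and apple dimensions are assumed round numbers, not measurements:

```python
# Rough back-of-the-envelope check of the forces discussed above.
# Drag coefficient, air density and apple size are assumed values.
import math

G = 6.674e-11            # gravitational constant, N m^2 / kg^2
m_apple = 1.0            # kg (assumed)
r = 0.10                 # m, separation between the two apples

# Newtonian attraction between the two apples
F_grav = G * m_apple * m_apple / r**2    # ~7e-9 N, a few nanonewtons

# Drag force from a 5 km/h breeze on a 10 cm diameter apple
rho = 1.2                # kg/m^3, air density (assumed sea-level value)
Cd = 0.5                 # drag coefficient for a sphere (assumed)
v = 5 / 3.6              # 5 km/h in m/s
A = math.pi * 0.05**2    # frontal area of a 10 cm diameter apple
F_wind = 0.5 * rho * Cd * A * v**2       # ~4e-3 N, a few millinewtons

print(f"apple-apple gravity: {F_grav:.1e} N")
print(f"wind drag:           {F_wind:.1e} N")
print(f"ratio:               {F_wind / F_grav:.0e}")  # roughly 10^6
```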

You cannot see wind on a webcam. You cannot see mass on a webcam. You cannot see vacuoles on a webcam. You cannot see air on a webcam. You cannot see the size of the earth on a webcam. Your knowledge is only as good as your experiments and measuring equipment. A monkey with a thermometer would beat a god-AI with a webcam if they were trying to predict the temperature.

I think this helps explain why yudkowsky is so alarmist about AI. If the only barrier to knowledge is “thinking really hard”, then an AI can just think itself into omniscience in an instant. Whereas if knowledge requires experimentation, isolation of parameters, and production of superior equipment, then the growth of knowledge is constrained by other things, like how long it takes for an apple to fall.

It really lays bare the throbbing core assumptions of Yud's entire worldview: being right is a mental trait that can be maximized, empiricism be damned. A smart enough person can just think their way to being right about things, so an infinitely smart AI-God would be right about everything even if they had basically no evidence at all to ground their rightness on. It's all very Aristotelian.
[deleted]
>I think he’s imagining that a machine like this would basically come pre-loaded with 100% of our modern mathematical notations, as if those are just essential truths lying out there that a good reasoner should just discover

How'd humans discover them?

>Everything about the symbolism we use to talk about the world, from basic calculus to tensor arithmetic, is infected by the fact that we invented it to describe our world, so you don’t get to use the fact that it looks simple as evidence that it is “objectively” simple in some sense.

Oh, we 'invented' it. So what's the process that humans used that makes it such that it can't be discovered independently?

>This is really an out-there claim he’s making - that there exists some learning algorithm so good that it can always deduce the right answer from next to zero information

The literal claim he's making is that three sequential, decent-resolution photographs of a moving object actually contain a LOT of useful information, but it requires significant effort to extract and use said info, and the actual amount of useful information to be extracted is limited only by the efficiency of the learning algorithms - and we already know that the [theoretical upper limit](https://en.wikipedia.org/wiki/Solomonoff%27s_theory_of_inductive_inference) is far beyond what most people would think. So it would be kinda stupid to assume a superintelligence's limit is closer to what humans could do rather than to that theoretical limit. Like, if you were given three sequential photographs of an otherwise unknown object being simulated in an unknown world, and you had access to multiple supercomputers and were given a decade or so to run analyses... are you saying you couldn't come up with some decent hypotheses as to how the physics in that world worked? And that you couldn't update those hypotheses to better accuracy if you were given additional photographs in the same sequence?
**Solomonoff's theory of inductive inference**

Ray Solomonoff's theory of universal inductive inference is a theory of prediction based on logical observations, such as predicting the next symbol based upon a given series of symbols. The only assumption that the theory makes is that the environment follows some unknown but computable probability distribution. It is a mathematical formalization of Occam's razor and the Principle of Multiple Explanations. Prediction is done using a completely Bayesian framework. The universal prior is calculated for all computable sequences - this is the universal a priori probability distribution; no computable hypothesis will have a zero probability.
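For the curious, here is a toy sketch of what that length-weighted prior looks like in practice. A hand-picked hypothesis list stands in for "all computable programs", so this illustrates the weighting scheme only, not actual (uncomputable) Solomonoff induction:

```python
# Toy, drastically simplified sketch of the Solomonoff idea: weight each
# hypothesis by 2^-(description length) and keep only those consistent
# with the observations. Real Solomonoff induction sums over *all*
# programs; the hypothesis list here is a hand-picked stand-in.
observations = [0, 1, 4, 9]   # "frames" of some process

hypotheses = {
    "n**2":       lambda n: n ** 2,
    "n**2 + 0*n": lambda n: n ** 2 + 0 * n,
    "fixed list": lambda n: [0, 1, 4, 9, 7][n],
}

def prior(description: str) -> float:
    # shorter description  =>  exponentially larger prior weight
    return 2.0 ** (-len(description))

# posterior mass goes only to hypotheses that reproduce every observation
posterior = {
    name: prior(name)
    for name, h in hypotheses.items()
    if all(h(n) == obs for n, obs in enumerate(observations))
}
total = sum(posterior.values())
posterior = {name: w / total for name, w in posterior.items()}

# predict the next "frame" under each surviving hypothesis
prediction = {name: hypotheses[name](len(observations)) for name in posterior}
print(posterior)    # "n**2" dominates because its description is shortest
print(prediction)   # the hypotheses disagree about the unseen frame
```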
Occasionally I think about what I consider the biggest tells of these guys being full of shit that I'd point to if I had to explain myself to someone, and that one long HPMOR takedown is among my top picks, this aspect of it specifically. As silly as it is to judge people based on some fanfic, I feel that a fic written by a thought leader of the community as a demonstration of his virtues, with an obvious self-insert main character, should make it fair game. And my highlight was having it pointed out that the scientific method as presented by HPMOR was "observe a phenomenon, come up with one hypothesis, then assume you're correct and never test anything to confirm this". It's "nice" seeing the same thing come up in his other writing.
the best part about that in hpmor is that the hypotheses aren't even scientifically possible! like, yudkowsky doesn't even have a high school level understanding of the underlying science, and it shows
https://twitter.com/shenanigansen/status/1171085546610941952 is once again extremely relevant.
Ah, but it's Bayesian, so you're ignoring its priors!
as an acausal robot god, I use point mass priors because I already know the truth of everything.
> I think this helps explain why yudkowsky is so alarmist about AI. If the only barrier to knowledge is “thinking really hard”, then an AI can just think itself into omniscience in an instant. Whereas if knowledge requires experimentation, isolation of parameters, and production of superior equipment, then the growth of knowledge is constrained by other things, like how long it takes for an apple to fall.

Well put. I find it absolutely shocking how successful Yud's grift has been, given how very obvious it is he has no technical knowledge of AI systems.
Yeah, even in basic machine learning courses they point out that it doesn't matter how fancy your algorithm is or how much data you get if the input parameters aren't useful. It's like if you tried to predict the price of a house by taking a super high definition picture of the mailbox. Sure, there'll be a correlation (fancier mailbox = fancier house), but any regular person would beat you if they knew the size of the house and the address, no matter how super genius you are.
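A minimal sketch of that point with made-up numbers; the feature names and the pricing rule are invented for illustration, and the "mailbox" feature is deliberately a weak proxy:

```python
# The fanciest fit in the world can't recover what isn't in the input.
# Synthetic data: "mailbox fanciness" is only weakly correlated with
# price, while floor area actually determines it.
import numpy as np

rng = np.random.default_rng(0)
n = 500
area = rng.uniform(50, 300, n)                 # m^2, drives the price
mailbox = 0.01 * area + rng.normal(0, 5, n)    # weakly correlated proxy
price = 2000 * area + rng.normal(0, 20000, n)  # "true" pricing rule

def r_squared(feature, target):
    # ordinary least squares with a single feature plus intercept
    X = np.column_stack([feature, np.ones_like(feature)])
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ coef
    return 1 - resid.var() / target.var()

print("R^2 using mailbox photo feature:", round(r_squared(mailbox, price), 3))
print("R^2 using floor area:           ", round(r_squared(area, price), 3))
```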
Baseball after much more than three frames. https://mobile.twitter.com/randal_olson/status/1164562565667495936
cracking up at this
Speaking as a super AI 3 frames in, general relativity is far-fetched nonsense - just like planes and birds just move up, apples just move down.
Alternate hypothesis: apples grow at the bottom and shrink at the top, it's just what they do.
It's funny how that's a recurring theme of yud's, isn't it? That one nice critical review of hpmor notes that harry, and by extension yud, are proponents of [aristotelian science](https://en.wikipedia.org/wiki/Aristotle#Scientific_style), that is, making observations and deriving information from them by thinking real hard. Within that train of logic, being able to think more about said observations simply increases the amount of information one can extract from them, leading to absurdity when positing a mind that can think thousands or millions of times faster than a "human" one. What a coincidence that said style of thought neatly supports a cult of personality, since obviously the person who is able to think about things the hardest (that's yud btw) must be the most fit to lead the world into friendly-AI utopia.
[deleted]
Yud is about as far from a materialist understanding of science, or anything else, as it is possible to be without actually being Ludwig von Mises.
I'm thinking about this "third frame" thing in the context of an actual Bayesian. Theoretically, three frames give you vastly more information than two as to the nature of motion, while one shot gives you nothing or close to nothing. I don't know what priors Yudkowsky wants to include upon viewing of the first frame - if you load a superintelligence with a basic understanding of what grass and water are like and throw the dog a bone with Newton's theories of motion, it could probably come up with at least *something* on the basis of one frame. But this raises the question from the other direction, and it's a scholastic though interesting one: what priors would you have to hold before the first frame such that by the third frame you could deduce general relativity? Given an appropriate formula for "superintelligent", a particularly good mathematician could probably come up with a suitable, speculative answer. But this puts Yudkowsky in a dilemma: his whole schtick is that at present you can't come up with that formula because the nature of the superintelligence is ungraspable, so it looks like he's in contradiction with his own other musings on the matter. Anyway, thought that was kinda funny.
The funny thing is, if it already knew Newtonian mechanics, a video of two apples falling would probably be evidence *against* Newtonian gravity theory. The theory is that objects with mass attract, and yet here we see two objects with mass that are clearly not exerting any measurable force on each other. Maybe the force is weak, but there is a hugely massive object just offscreen below the ground? Ah, but we can see that the apples are falling parallel downward, not converging to the same point. The only way that the falling force could be due to mass attraction would be to assume the existence of an object with a *gargantuan* mass, a *ridiculously* long way away, both of which are many orders of magnitude beyond what is experienced on screen. Any sane AI would probably discount the hypothesis of "mass attraction" in favour of more plausible theories. Perhaps the ground is a uniformly charged electric plate and the apples have an opposite charge proportional to their mass? Or perhaps the primary-school physics F = mg, where g = 9.8 m/s², is just a law of the universe? That has half the parameters of Newton's law! Such hypotheses would explain the data just as well without the need to suppose ridiculous unseen objects.
I'm not with you here at all: if you're a superintelligence that happens to be loaded with Newton's theories of motion then everything you've just described is explainable thereby. It's trivial to assume that there is a sufficiently massive object just below the ground onto which the apple is going to fall if you already have the assumption that things fall due to the relevant equation. This was discussed at length in the pre-Newtonian period: Tycho Brahe proposed a distinctly odd cosmology based on the assumption that the Earth was uniquely massy, in contrast to the stars. But if you plug in Newton's formulation of mass then there's no problem, because it's enough to assume that the formulation is established and paramount, justified by its original prior - depending on what you pack in to the prior - so as to explain the falling of the relevant fruit. The question therefore turns on what you pack in to the prior, which is where the Bayesian project fails if it is to be the final and total source of all knowledge that a superintelligence has. Since the relevant priors are a black box we can call them a black box and get on with doing actual work in our lives.
Of course it's explainable (assuming the laws of mechanics means you've done almost all the hard work already). The law F = GMm/r² would definitely be in the hypothesis space somewhere, as would wrong laws like F = GMm/r³, F = GMm/log r, etc. However, my point is that the apple experiment would cause it to *discount* this hypothesis in favour of something like F = mg, which fits all the data with fewer assumptions. The key difference between our AI here and Newton is that Newton *already knew* that a very large object was underneath the apple, which obviously weighed a large amount; he didn't have to suppose anything unknown for the theory to work. He also had access to astronomical data, an entirely different experimental setup which worked way better with his theory. The key point here is that to figure out a law, you need different experiments, and knowledge of all the other parts of physics to build on. Newton stood on the shoulders of giants; the AI is merely trying to jump really high on its own.
Sorry, but no. Your "would" as to what the AI "would" discount is contingent on what the AI has plugged in to begin with. This is all covered in what I said above.
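To make the disagreement above concrete, here is a small sketch with simulated frame positions (the 30 fps frame rate and the drop height are assumed): three frames pin down a single acceleration, and any unseen mass M at distance R with GM/R² equal to that acceleration fits the frames exactly as well as F = mg does.

```python
# Three frames of a falling apple pin down one number (a constant
# downward acceleration); any pair (M, R) with G*M/R^2 equal to it fits
# exactly as well. Frame positions are simulated, not from a real video.
G = 6.674e-11          # N m^2 / kg^2
dt = 1 / 30            # webcam frame interval, s (assumed 30 fps)
g_true = 9.8           # m/s^2, used to generate the fake frames

# apple height in three consecutive frames, dropped from rest at 2 m
y = [2.0 - 0.5 * g_true * (k * dt) ** 2 for k in range(3)]

# the only thing three frames determine: a second difference, i.e. one
# acceleration value (finite-difference estimate)
a_est = (y[2] - 2 * y[1] + y[0]) / dt**2
print(f"estimated acceleration: {a_est:.2f} m/s^2")

# "F = mg" explains this with a single constant.  The inverse-square law
# explains it too, but only by positing some unseen mass M at distance R,
# and infinitely many (M, R) pairs are indistinguishable on the webcam:
for R in (6.4e6, 1.0e7, 3.0e7):             # metres, all off-screen
    M = abs(a_est) * R**2 / G
    print(f"R = {R:.1e} m  ->  M = {M:.2e} kg fits the frames equally well")
```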

Why is physics the only science with these guys?

So much armchair speculation about relativity and quantum mechanics, never chemistry or geology or astronomy or biol… actually, never mind, it’s fine with me if they just want to circlejerk about pop physics all the time.

Isn't the idea behind the AI-in-a-box thing that the AI is so good at psychology- and sociology-related predictions that it can manipulate you by building a correct model of your brain and testing against it, from just text observations?
Roughly yeah. In its strongest form the AI-in-a-box thought experiment relies on the supposition that the AI in question is so much quicker than you are that it can overpower your relatively feeble-minded powers of reasoning at a greater rate than you can keep up with, and so work out as the conversation goes on what it would take for you specifically to let it out of the box - faster than you can come up with reasons not to do so. One major problem with this strong form of the thought-experiment is that it games the rules in the AI's favour: you have to keep talking to it, you have to be acting (on a LessWrongian conception of what it would be to act) rationally, conversational persuasiveness is an f(x) of "intelligence" etc. But the real problem with the *actual* AI-in-a-box thought experiment is that Yud claims to have played it and won like a Turing Test, completely undermining the central premises of the strong version of the thought experiment by eliminating all of the above rules that game the experiment in the AI's favour.
IIRC, it's not. I mean, it would have to be to work, but they don't consider it psychology or sociology, they see it in terms of an AI being so smart it can just brute-force the game theory of every possible interaction.
Good point on saying it could be brute forced. Which kinda means they didn't pay attention in comp sci when brute forcing came up, but well, if you make your AI magical, why not make it able to do all the things in short order.
[This isn't inconsistent with what I've said, but it's incomplete](https://www.reddit.com/r/SneerClub/comments/dsb0cw/yudkowsky_classic_a_bayesian_superintelligence/f6ovzp2/)
Mostly because other subjects have an even more obvious reliance on lots of data. Even Yud isn't dumb enough to claim that a super duper AI could deduce the existence of octopuses by looking at three frames of a video of an apple tree.
> Even Yud isn't dumb enough to claim that a super duper AI could deduce the existence of octopuses by looking at three frames of a video of an apple tree. Well, he does claim you could deduce the *psychology* of an octopus by looking at a picture of its tentacles. Using evopsych. (well, in the story it's a hyperdimensional intelligent octopus that is simulating our whole universe...)
... I may have underestimated Yud's stupidity.
To be fair, they had a lot of octopus pictures by that time and the octopi were extremely cooperative.

The story (a dodgy AI allegory) is also worth a read: at one point the super smart humans use super-evopsych to manipulate extradimensional tentacle creatures from a higher universe into hooking us up to their internet. This is all achieved by looking at tentacles on a webcam. This is meant to sound plausible enough to make you scared of AI.

It sounds like he got this idea from a hentai.
Adrian Veidt’s plotline on the new *Watchmen* series is incredible!

The superintelligent AI would invent Marxism-Leninism by the 10th frame and we would be living in full communism by the time the apple hit the ground.

I don’t think he realized how big the hypothesis space is.

The AI is just that fast, the space doesn't matter.
I mean, what is EXPSPACE to a super intelligence. It would just use its super brain to collapse down the whole polynomial hierarchy in .002 microseconds. Obviously. After all, it's a *super* intelligence.
But what about super super intelligence?

My artificial intelligence is great enough that three successive images from a porno are enough to derive all of biology, sexual ethics, and human psychology.

[deleted]
SFW but NSFL: https://www.youtube.com/watch?v=XEHATUm-hMI

You must understand that, as a Bayesian, I can integrate over my priors to deduce the differences between bed bugs, humans, and their genitals, so you should not be too worried. Ha ha, of course.

And of course if it fails then it isn’t actually a super intelligence after all.

Which reminds me of a point: did they ever try to classify different types of superintelligences? I know researchers have made classifications for machines which can do more than Turing machines (there were a few theoretical levels of what these hyper-Turing machines could do). This seems like basic theoretical research which could easily be done and would be a good start if you are serious about superintelligences.
"friendly" and "unfriendly" and "DO NOT THINK IN SUFFICIENT DETAIL ABOUT THIS ONE"

Something vaguely (vaguely) similar to this is *Incandescence* by Greg Egan, where one section is about the discovery of general relativity by a pre-industrial civilization orbiting a collapsed star.

But that has the benefit of being written by a professional author who understands the science.