r/SneerClub archives
Something about a ‘true’ AI being able to ‘cogito ergo sum’ relativity, from this very sub! (https://reddit.com/r/SneerClub/comments/92vqyt/_/e4eoezk/?context=1)

Yudkowsky crap is often more cleverly self-defeating than it appears at first glance; the core principle of GR is that you wouldn’t be able to tell the difference between the camera accelerating upwards and the apple standing still, a case that is more simply described with Newtonian mechanics. (Leaving aside the fact that the simplest way to encode acceleration is v += a; p += v; and that for just 3 frames, simply storing the positions is probably the simplest thing you can ever do.) It’s like he’s smart with a minus sign.
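A minimal sketch of the encoding point being made here (my toy numbers, not the commenter's): uniform acceleration really is just two update lines per frame, and for only three frames, storing the raw positions is no longer than storing the rule plus its initial conditions.

```python
# Sketch, assuming an arbitrary falling "apple" (values are illustrative).

def simulate(p, v, a, frames):
    """Euler integration: v += a; p += v, once per frame."""
    out = []
    for _ in range(frames):
        v += a
        p += v
        out.append(p)
    return out

# Three frames of an apple accelerating downward (arbitrary units).
frames = simulate(p=100.0, v=0.0, a=-9.8, frames=3)

# Encoding option A: the update rule plus (p0, v0, a) -- three numbers
# and a tiny program.  Encoding option B: just the three positions.
# For three frames, B is at most the same size, which is the point.
```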

>*the core principle of GR is that you wouldn’t be able to tell the difference from the case of camera accelerating upwards and apple standing still*

This is a slight misunderstanding. The GR you're talking about there is Galilean relativity, not general relativity. The core principle of general relativity is that it is impossible to tell the difference between gravity pulling the apple downwards and the *room* accelerating upwards.
That’s what I mean, the camera being the room in that case (since only an apple and a camera are mentioned). Basically the existence of gravity does not even enter consideration when it comes to the shortest encoding of 3 camera pictures.
> the shortest encoding of 3 camera pictures

A priori, cogito ergo sum, barely readable JPEG twitter memes.
I think over time they shifted the goalposts to where it's vague whether the claim is merely about considering infinitely many hypotheses (GR included), or actually concluding galaxies, planets, dark matter, and GR from the looks of the apple. The original was a fairly straightforwardly stupid idea: that the shortest universal TM program producing 3 pictures of an accelerating apple would involve GR. 3 because 2 points could be un-accelerated motion, and a 4th picture doesn't really add anything. (Unless you're considering rotations, in which case I'll grant that a bunch of pictures, definitely more than 3, would maybe give you some understanding of inertia tensors, but you'd still need enough pictures to overcome the sheer cost of encoding any kind of algorithm for computing anything, versus simply storing values. TMs and other models of computation used for formalized induction aren't very expressive; I'd bet it is actually a lot of pictures, but of course how many is incomputable as well, so we get a fundamentally stupid claim whose best defense is "it's incomputable and I might be right".) Maybe they need a neologism for this: jellomanning...
At these scales, the differences between the results of the Galilean and Lorentzian transformations are both qualitatively and quantitatively indistinguishable. The AI hasn't got enough information to form any physical heuristics, let alone a complete mathematical description.
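A back-of-envelope sketch of the indistinguishability claim (my numbers, not the commenter's): at everyday speeds the Lorentz factor differs from the Galilean value of 1 by far less than anything three webcam frames could resolve.

```python
# Sketch, assuming an apple falling at ~10 m/s (an illustrative value).

c = 299_792_458.0          # speed of light, m/s
v = 10.0                   # a briskly falling apple, m/s

beta2 = (v / c) ** 2
gamma = 1.0 / (1.0 - beta2) ** 0.5

# gamma - 1 is on the order of 1e-16 -- far below the precision of any
# measurement extractable from three frames of webcam footage.
print(gamma - 1.0)
```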
But at best that's Galilean relativity versus special relativity, general relativity is still absent.
Well indeed, but at human length and time scales, identifying that spacetime isn't Minkowski is a *serious* challenge, even before the information restrictions that we place upon our AI system. If it could not derive special relativity from 3 frames of normal footage, then it most certainly could not derive general relativity.
No one ever seems to bring up maybe three frames of an object passing behind a black hole. That would at least sound somewhat more reasonable.
Certainly more plausible, but still profoundly unlikely unless the AI is presented with more information. In particular, the AI has no reason to make the assumptions we would. If you told me it derived the mathematics of the apparent deformation, then I would be tempted to believe you. But to derive the underlying physical theory is *much much* harder. The AI needs to use those three frames to come up with the concepts of spacetime, mass, curvature, momentum, the various relevant conservation laws - and so on. I just don't think that there is enough information to do that.
Coincidentally I've just come from reading Greg Egan's *Incandescence*, where aliens at a Bronze Age level of technology figure out the basics of general relativity thanks to the coincidence that their home world is in a tight orbit around a supermassive black hole, where the effects are far more pronounced and distinguishable from Newtonian gravity even with the naked eye. The most interesting argument of the book, though, was that they approach it from a quite different angle than we did historically: throughout the novel they never even approach the notion that mass is the source of the curvature tensor, they just accept the geometry as a given. They also never develop the concept of a "force" as we do, but immediately see weight as an effect of an object not following its "natural path" (what we would call a geodesic). You don't need the concepts of mass, momentum and many of those conservation laws if you have other symmetries that can stand in for them. But yeah, the "three frames" thing is an obvious extreme exaggeration that vastly underestimates just how much prior knowledge and assumption we have to use in our thinking just to get by.
Ooh, interesting, I love Greg Egan's stuff; that's one of the ones I haven't read yet. I will be honest though, that is only a partial uncovering of GR: to properly call it general relativity, the link between the mass distribution (or more properly the stress-energy tensor) and the curvature must be present and understood.
Well, the leading hypothesis should be the shortest representation of those 3 images, and I seriously doubt that GR would be in any way involved in that, because you can fairly compactly describe the resulting distortion without going that general. edit: Then there's the practical issue that which representation is shortest is undecidable, due to the Halting Problem and very "long running" hypotheses. The shortest that completes in limited time, where limited time is 20 billion years using every atom in the observable universe... well, that's still practically undecidable, plus on top of that there's no good logical reason to expect it to be particularly more compact than what you'd get with brains and pens and paper and computers. The largest known busy beaver value grows very slowly with available computing resources; as in, it doesn't grow.
Honestly, I'm not sure if it would even end up rating Newtonian gravity that highly. For Newton's equations to work, there needs to be an object offscreen (the Earth) with a centre of mass a distance away that is orders of magnitude greater than any distance the webcam has encountered, and with a gargantuan mass. A webcam doesn't even have a way to measure mass, so one of the key variables is forever out of reach! A simpler explanation for a ball falling would be that the ground is a charged plate and that each ball has a charge proportional to its size. Or that the laws of the universe are: balls fall at 9.8 m/s², flat surfaces stay still. Or, as noted above: there is no gravity, the ground accelerated upwards.
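A sketch of this underdetermination point (my toy numbers): three frames pin down at most a position, a velocity, and one acceleration value. Newtonian gravity, a charged plate, and a bare "balls fall at 9.8 m/s²" rule all reproduce the same three data points exactly, so the data cannot prefer one.

```python
# Sketch, assuming a 30 fps webcam and a ball dropped from 2 m
# (illustrative values, not from the thread).

dt = 1.0 / 30.0                      # frame interval, s
g = 9.8                              # m/s^2

# Heights of a dropped ball at t = 0, dt, 2*dt under y = y0 - g*t^2/2.
y = [2.0 - 0.5 * g * (k * dt) ** 2 for k in range(3)]

# The second finite difference recovers the acceleration -- and nothing
# else.  Which *theory* produced that acceleration is invisible here.
a_recovered = (y[0] - 2 * y[1] + y[2]) / dt ** 2
```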

“And to this end they built themselves a stupendous super-computer which was so amazingly intelligent that even before its data banks had been connected up it had started from I think therefore I am and got as far as deducing the existence of rice pudding and income tax before anyone managed to turn it off.” – Douglas Adams

“The a priori is greatly neglected. Logic is very powerful.” – Kurt Gödel

I can’t read this discussion and not think, what a fantastic waste of energy it is to build a wonderful supercomputer and then just show it pictures of apples in order to settle an internet argument.

Curve ball: the superintelligent AI reproduces transcendental idealism.

It does it like this:

- AI finds the simplest explanation for its observations.
- AI finds trillions of more complicated explanations for its observations.
- AI realizes that it has no truly grounded way of justifying a preference for one explanation over another, because it doesn't accept a priori that simpler explanations are more likely.
>*AI realizes that it has no truly grounded way of justifying a preference for one explanation over another, because it doesn't accept a priori that simpler explanations are more likely.*

To be fair, the invalidity of this assumption is pretty much the groundwork for the entire history of science. Occam's Razor has proven ineffective time and time again in the face of Newton's Flaming Laser Sword.

If the AI were so smart, it would know a quadratic can be fit to any three points, and therefore it would have the humility to know it needed more data.
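The quadratic-fit point above can be made concrete (example points are mine, chosen arbitrarily): a degree-2 polynomial passes exactly through *any* three points with distinct x-values, so three frames can never distinguish a physical law from a free-parameter curve fit.

```python
# Sketch: Lagrange interpolation through three arbitrary points.

def quadratic_through(pts):
    """Return the unique degree<=2 polynomial through three points
    with distinct x-coordinates, as a callable."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    def p(x):
        return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
              + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
              + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))
    return p

# Any three points at all -- no physics required.
pts = [(0.0, 3.0), (1.0, -1.0), (2.0, 7.0)]
p = quadratic_through(pts)
# p hits every input point exactly; the "acceleration" it implies says
# nothing about what generated the data.
```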

Nah, the AI is super-Turing and just pulls the correct answer out of an [oracle machine](https://www.newscientist.com/article/mg22329780-400-turings-oracle-the-computer-that-goes-beyond-logic/) (a machine that gives the correct answers). Checkmate sneerclub, god-AI wins!
uhh, i think you'll find that if you count each pixel individually, there are thousands of points! If it just digs deep enough into pixel cluster #3648, gen rel will pop out easily!
Well, if you take a non-digital picture the colors are real numbers, so the information is theoretically infinite, so you are correct, just dig down deeper. Hell, it's even easier: pi is infinite, just give the AI pie.
A finite number of photons will reach the camera, and the things that react to the photons (photoreactive molecules, rods and cones, CCD cells) have positions that are pre-determined and not changed by the light, so in any case the information in a photograph is finite.
... I wasn't serious. While the "reals contain more info than you can encode in a digital thing" point is true, it is obviously bullshit to look at it like I did. I was just using some hypercomputation concepts wrong on purpose.

How many times have we defeated this exact argument in this exact manner? It’s exceedingly trivial.

Never, in the history of the internet, or maybe even in human history, has an argument gone away after being on the losing side in one debate.