r/SneerClub archives

Oh yeah, the whole Yudkowsky project is openly anti-scientific, even though it doesn’t look it at first. He literally tells people to “break their allegiance to science”. He thinks that if you’re smart enough, you don’t need “testing” or “experimentation”, you can just deduce your scientific breakthroughs by thinking really hard (with Bayesianism), hence the idea that a super-AI would instantly have magical powers.

I’ll break out this ridiculousness again as an example:

> Riemann invented his geometries before Einstein had a use for them; the physics of our universe is not that complicated in an absolute sense. A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

We’ve gone over before how this is total horseshit, but it’s a good illustration of his worldview.

Also if anyone wants proof that Yudkowsky has zero idea how science actually works, check out this post where he thinks that all scientists just toss out all their ideas the first time they see contradictory data. As a scientist, lmao.

> Also if anyone wants proof that Yudkowsky has zero idea how science actually works, check out this post where he thinks that all scientists just toss out all their ideas the first time they see contradictory data. As a scientist, lmao.

As a scientist, you have no idea how bold and courageous it was to remain skeptical of a study claiming that prayer helps with IVF.
It’s interesting that Yudkowsky’s opinion on empirical knowing by *sheer thinking about the given image* reflects a version of Aristotle’s account of knowledge and perception, later explicitly borrowed by Ayn Rand. I wonder if there’s something to that comparison.
> where he thinks that all scientists just toss out all their ideas the first time they see contradictory data

I’ve had this, and a quote from the fan-AI-Box write-up, stuck in my head for a while:

> After playing the AI-Box Experiment twice, I have found the Eliezer Yudkowsky ruleset to be lacking in a number of ways, [...] For instance, his ruleset allows the Gatekeeper to type “k” after every statement the AI writes, without needing to read and consider what the AI argues. I think it’s fair to say that this is against the spirit of the experiment, and thus I have disallowed it in this ruleset.

Two very big problems with the rationalist attitude: an insistence that every argument should and MUST be listened to, and that extraordinary claims require no more evidence than completely mundane ones. No wonder that rationalists keep letting the AI out of the box.
> Two very big problems with the rationalist attitude: an insistence that every argument should and MUST be listened to, and that **extraordinary claims require no more evidence than completely mundane ones**

Huh. I mean, some people might say Bayes’s Law is pretty much just “extraordinary claims require extraordinary evidence” expressed in math.
Yes, Bayesian logic is supposed to resist extraordinary claims via stubborn ‘priors’, but as has been discussed in our little sneer club, rationalists don’t generally apply Bayes as intended; they use it more as a screen to legitimize their claims with jargon.
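For the record, the math does say what the comment above claims; here’s a minimal sketch (in Python, with made-up numbers purely for illustration) of how a low prior forces you to demand much stronger evidence for an extraordinary claim than for a mundane one:

```python
# Minimal sketch of "extraordinary claims require extraordinary evidence"
# via Bayes' theorem. All probabilities are made up for illustration.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) for a binary hypothesis H, given evidence E."""
    numerator = p_evidence_given_h * prior
    marginal = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / marginal

# Mundane claim ("it rained last night"), modest prior, decent evidence:
print(posterior(prior=0.3, p_evidence_given_h=0.9, p_evidence_given_not_h=0.1))
# ~0.79 -- ordinary evidence is enough to make it credible.

# Extraordinary claim ("prayer improves IVF success"), tiny prior, same evidence:
print(posterior(prior=0.001, p_evidence_given_h=0.9, p_evidence_given_not_h=0.1))
# ~0.009 -- the same evidence barely moves the needle; the likelihood ratio
# has to be enormous before the posterior becomes large.
```

Which is exactly the step that gets skipped when Bayes is used as decorative jargon.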
>And to this end they built themselves a stupendous super-computer which was so amazingly intelligent that even before its data banks had been connected up it had started from "I think, therefore I am," and got as far as deducing the existence of rice pudding and income tax before anyone managed to turn it off.
A sneer that predates its referent by almost two decades. Douglas Adams was a king among men. In the high-performance computing world there's a concept called the "[roofline model](https://en.wikipedia.org/wiki/Roofline_model)", which at its core is the common-sense notion that you can have all the computing power in the world, but it won't do you a lick of good if you can't keep it fed with data. It's comments like this where it's painfully obvious that Yud has not merely *assumed* that human knowledge is bounded by a lack of compute capability rather than a lack of data, but is blissfully unaware that there's an assumption to be made at all.
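If you’ve never seen the roofline model, here’s a minimal sketch (in Python, with invented hardware numbers): attainable throughput is the lesser of peak compute and memory bandwidth times arithmetic intensity, so past a certain point more compute buys you nothing until you can move more data.

```python
# Minimal sketch of the roofline model. The hardware numbers are invented
# purely for illustration.

PEAK_FLOPS = 10e12      # hypothetical peak compute, FLOP/s
PEAK_BANDWIDTH = 100e9  # hypothetical memory bandwidth, bytes/s

def attainable_flops(arithmetic_intensity):
    """Arithmetic intensity = FLOPs performed per byte moved from memory."""
    return min(PEAK_FLOPS, PEAK_BANDWIDTH * arithmetic_intensity)

for ai in (0.5, 2, 10, 100, 1000):
    print(f"{ai:>6} FLOP/byte -> {attainable_flops(ai) / 1e12:5.2f} TFLOP/s")

# Below the ridge point (100 FLOP/byte on this made-up machine) performance
# is bandwidth-bound: a "smarter" processor changes nothing until you feed
# it more data -- which is rather the problem with a webcam.
```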
> hooked up to a webcam

Bonus points for that detail.
> Unfortunately, Science has this notion that you can never go against an honestly obtained experimental result. So, when someone obtains an experimental result that contradicts the standard model, researchers are faced with a dilemma for resolving their cognitive dissonance: they either have to immediately throw away the standard model, or else attack the experiment - accuse the researchers of dishonesty, or flawed design, or conflict of interest...

Zoom and enhance:

> dishonesty, or flawed design, or conflict of interest

"One of these things is not like the others / One of these things just doesn't belong..."
Stating you don't believe the data and refusing to elaborate: strong, rational Bayesian behaviour.

Actually using your expertise to propose explanations of why you think the data might be wrong: weak, irrational science loser behaviour.
"So we observed this galaxy that doesn't seem to fit with current models of galaxy formation and-" "LIAR! HERETIC! WHO DO YOU WORK FOR???" Everyday with this shit, man. Because we all know that if there's one thing scientists hate, it's observing new things that can't yet be explained.
The Ted Chiang story ["Understand"](https://web.archive.org/web/20140527121332/http://www.infinityplus.co.uk/stories/under.htm) and its consequences have been a disaster for the human race
I would argue that the financial and cultural success of dorky tech arseholes has been a disaster for fun sci-fi stories
It is kind of annoying that these people turned fun Stross/Egan riffs from the 90s into modern day "thought leader" careers.
and most annoying to Stross and Egan
Lol, btw, re-reading our discussion in there about overdetermination/underdetermination of evidence and theory, I realise that two years ago I completely misread your argument: fair play!
When in doubt, prax it out 🧐😎😎
Do you think his anti-science attitude has anything to do with his certainty that the many-worlds interpretation is the correct one? From a layman's perspective, it is weird to see another layman (because Yud is a layman) being SO sure of something even experts have trouble with.
Yes, his views on MWI are a pretty straightforward example of his anti-scientific ideas. He thought really hard and came up with an explanation that makes sense to him, which to him means it's the correct answer and anyone who disagrees must not have thought hard enough about it.
Yeah, I think that's about the size of it. He also gives himself bonus points for declaring belief in something that sounds hard to believe. And, in arguing that science is broken, he manages to recapitulate in hyper-concentrated form all the foibles that we physicists ourselves have when discussing quantum foundations: the easy reliance upon convenient labels instead of [detailed history](https://arxiv.org/abs/1502.06547); the sweetly naive belief that of all the ways to present the mathematics, the way you happened to learn first is True; etc. Linking to prior comments in lieu of a long and overly serious rant: [1](https://www.reddit.com/r/SneerClub/comments/m50kuz/why_many_worlds/gqxh9im/?context=3), [2](https://www.reddit.com/r/SneerClub/comments/po1yqv/short_of_content_lesswrong_pilots_500_payments/hd5winu/?context=3).
It's all supposed to link together. MWI is supposed to be provable by Bayes...but not the rigorous mathematical kind, the handwaving kind invented by Yud. Since MWI did not come first in science, science is different from and inferior to Bayes.