r/SneerClub archives
Eliezer speculates that AIs are bad at math proofs because earth mathematicians are bad at math (https://i.redd.it/vryeab7jr42a1.jpg)

Man Who Is Afraid That Super-AI Will Destroy World, All Its Simulated Inhabitants: “I Don’t Understand How Computers Work”

Those things go hand in hand. There is a reason long-term transhumanist science fiction uses computronium to store all the humans in, and not Turing machines.
Are you saying you think human consciousness is fundamentally not simulable by a Turing machine? That seems implausible to me. (Of course, our existing AIs are nowhere near what I would call "sentient".)
Whether that is possible is still very much an open question. And we know Turing machines are bounded in their capabilities; in CS theory there are machines that can do more than Turing machines (called either super-Turing or hyper-Turing; sorry, it has been 20 years, so I forget which). At the time some fellow students and I were going through the research on this, and it was all interesting (even if most of the papers seemed to focus on/around the work of, iirc, one woman at an Israeli university). And these super-Turing machines fall into classes of their own: there are things a super-Turing machine can't calculate but a super-super-Turing machine can, etc. (You will have to forgive me that I don't recall, nor could I find when I looked for a short while a few years ago, the math/logic which proves this.) And the various levels of super-Turing don't have to be that exotic, btw: a computer that could solve an NP-complete problem in P time would already count as super-Turing (assuming P != NP, of course).

So human consciousness (if it even exists!) might not sit at the Turing level but at one of the super-Turing ones. (I have no proof of that, and I will not be so boring as to pretend I understood what Penrose was trying to prove with physics and cite his book as evidence that the mind is not computable / uses quantum mechanics or whatever it was. I also assume his 1989 book has been superseded ;) I have not kept up with any of that; I just know I failed to understand the physics/math used to build the initial argument.) So, without real proof, I think it is likely that consciousness is not Turing-simulable in any meaningful way. I wish I were wrong, of course, because living forever (or at least for as long as I want) sounds pretty alright to me [IF we turn into some form of democratic socialist/communist post-scarcity civ, of course. Not under fascism/capitalism/authoritarianism].

Now, to come back to the super-Turing thing. The fun part is that 20 years ago they didn't just posit these various classes, they also had various steps for reaching the next level. One of these theoretical constructs (to me it always felt like it was included for completeness' sake, not as a serious plan; more of a 'perfectly spherical cow' thing) was an oracle machine, which just magically gives you the correct answers of the next level up. [That is why I had no words when I saw this topic.](https://old.reddit.com/r/SneerClub/comments/zq741s/how_to_develop_an_advanced_ai_step_1_start_with/) (Well, I had words, but you know what I mean.) It is like I could feel that one Israeli researcher's thoughts coming back to me via Leverage (I assume they found similar research papers). Anyway, that is why I don't think it is all that plausible that AGI can do human consciousness. My knowhow here is out of date, though, so it is all feels over reals, if you know what I mean.

E: oh look, a wiki page: https://en.wikipedia.org/wiki/Hypercomputation (I have not read it recently, but iirc it covers the idea I was trying to describe). Using computronium in SF is a good way to sidestep all these issues, of course.
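For what it's worth, the "math/logic which proves" the strict tower of levels is most likely the Turing jump; a minimal sketch of the standard diagonal argument, writing $\Phi^A_e$ for the $e$-th machine with oracle $A$:

```latex
% For any oracle A, the jump A' is the halting problem for A-machines:
\[
  A' = \{\, e : \Phi^A_e(e) \text{ halts} \,\}.
\]
% A' is not decidable with oracle A: if some \Phi^A_d decided A', build a
% machine D (with oracle A) that halts on input e iff \Phi^A_e(e) does not
% halt. On D's own index d* this yields
\[
  \Phi^A_{d^*}(d^*) \text{ halts} \iff \Phi^A_{d^*}(d^*) \text{ does not halt},
\]
% a contradiction. Iterating the jump gives the strict tower of levels
\[
  \emptyset <_T \emptyset' <_T \emptyset'' <_T \cdots
\]
% where each level computes strictly more than the one below: exactly the
% "super-super-Turing" hierarchy described above.
```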
Sure, I know about hypercomputation and oracles and all that. I just don't personally think that you can actually build them in the real world; the laws of physics as we know them can be effectively simulated on a Turing machine, and I don't personally believe that we'll find some weird loophole around that.
Guess we are going to have to disagree on that one. :Shrug:
I honestly think all his ravings just come from an insecurity over how close this new A.I. is to replicating his own completely superficial intelligence. Especially given the obsession with I.Q. and intelligence as a virtue and an identity, dude must perceive this shit as such a threat. I guess it's not surprising that the A.I. Luddites would be those who identified themselves too closely with the perceived strength of their brain-as-machine. Not even the original Luddites self-identified with their bodies-as-machines in that way. It was really more of a labour movement and protest against working conditions.

In a reply, someone explains exactly why EY is wrong:

A better way to put this might be to say that when bullshit is called, it’s usually before it hits publication or is done via editorial retraction. Neither is probably in the corpus.

Only for EY to double down:

I’m skeptical that this hypothetical work gets performed wonderfully, somewhere conveniently hidden from sight, if there’s not sufficiently systematized instruction about it to appear in an LLM dataset.

There are plenty of "I call bullshit" manifestos that actually do get published and sometimes even result in retraction of the bullshit, too. They're just published as Matters Arising or Letters or whatnot, so you might actually need to be a human being who knows about academia rather than a computer algorithm trawling the Articles in order to get something out of them.
That or they appear in private correspondence, which is usually only published after the author's death (if ever). You could probably go through the archival material of someone like Erdős and find examples of them "calling bullshit", but that would require actual effort rather than just relying on corpora compiled by other people.
if information can't be automatically processed by an algorithm does it really exist? but yeah [amazing what you find in academics' private correspondence](https://magazine.scienceforthepeople.org/online/the-last-refuge-of-scoundrels/)
Most examples of "calling bullshit" on *total* bullshit will be during conversations over coffee or beer about the crank e-mails we all get. (I receive at least one a week just because I've published papers with "quantum" in the title.) It's very rare to write anything down about those, since there's hardly ever any benefit to doing so. The crank is unshakeable in their convictions — or, without loss of generality, *his* convictions, since crackpottery is even more gender-biased than actual science, for whatever reasons. Writing a lengthy debunking won't impress your tenure committee. If anything, it will occupy time you could be spending on something that will. Occasionally, someone will write down a thorough call-out as a webpage. That seems to happen when there is an additional motivating factor, like the crank [being well-funded enough to take out advertisements](https://web.archive.org/web/20090511031258/http://hep.ucsb.edu/people/bmonreal/Null_Physics_Review.html).

What is he asking for? Peer-reviewed mathematical periodicals should publish incorrect proof assertions, then call them out?

edit:

“I theorize this to be unironically true” is a good sneer club flair up for grabs.

edit 2:

https://twitter.com/ESYudkowsky/status/1595914251511529477

EY doubts that peer review happens, considers its existence “hypothetical”.

> EY doubts that peer review happens, considers its existence "hypothetical".

So, all the people who claim to be working for journals, acting as reviewers, etc. are just what, lying? Engaged in a vast conspiracy? It's always interesting to discover that the thing you spent months of your life working on was all just a ruse being perpetuated by "Big Academia"...
I once worked for a smallish startup that got a lot of attention in conspiracy theory circles for a while. That was quite a trip, seeing how powerful they thought we were, as compared to the shitty python CSV parsing code I actually wrote.
Oh man you can’t just dangle that
Way too small a company, I’d be identifiable
Damn if my job in a smallish startup was the focus of conspiracy types I’d fuck with them to the maximum possible extent up to and including ‘leaking’ source code with ominous variable and function names and a false git history.
Way too busy putting out fires of our own making. That was a fucking awful job.
Whoa, a CA drone in the wild?
He seems to have decided on the spot that there should be a Journal of Peer Review, the absence of which is "convenient" to someone, for some reason.
I am once again reminded that Yud blocked me before I had a chance to be directly mean to him on Twitter and slightly saddened I could not tell him he was stupid personally.
If twitter actually fizzles out, you should be able to get some jabs in after the migration to mastodon/hive/whatever.
Before that happens I think the feature which actually blocks people will break. Which will be a fun moment. (I already read somewhere that people managed to post while actually in timeout)
Oh shit, is he over there? I hadn't even thought about that, I could still call him a dumb shitlord
I don't think he's over there yet, but if twitter goes down, he'll definitely find a place to post tweet-sized thoughts.
The problem with mastodon for him is that the sneerers already beat him to the migration so he’d get endlessly dunked on as soon as he joined 😎
Or tweet-sized thoughts that somehow take him 10,000 words to articulate...
Nice

Dude who actively avoids academia has no idea how academia works and as usual, won’t let that ignorance stop him from making grand pronouncements.

A couple of decades ago I thought he could actually make something that would prove he has something worthwhile to contribute, even though he has no formal training whatsoever in any of the things he talks about. Yep, still waiting. Actually, I stopped waiting; now I sneer. Back then I wrote him an email urging him to do so, that was like 20 years ago. He had some excuse like "none of them know what they are talking about" etc.
[deleted]
I found that account interesting because it means Eliezer has interacted with the peer review process at least once, which makes his denial of it in the OP's tweets puzzling. I would have assumed Dunning-Kruger ignorance on his part; now it seems to me he is being deliberately deceptive in order to put down the mainstream academic process.
[deleted]
Obviously the so called “peer review” process needs a prediction market /s
hah. doing what he accuses others of doing, where it counts the most. many such cases.
He has updated his prior to say that the grapes were sour anyway.
I thought you were going to cite David Chalmers' report that "I've tried to get very smart colleagues in decision theory to take the TDT/UDT material seriously, but the lack of a really clear statement of these ideas seems to get in the way" or Rachael Briggs turning down a $20,000 grant to give TDT/UDT a write up meeting academic rigor, after preliminary discussions led her to conclude that "it would be unlikely for her to produce an article that would be satisfactory to both her and SIAI." *This* one's got detail though!
Weird thing is I’ve met Schwarz, so it was amusing to come across him talking about Yudkowsky in the negative at the same time as I’d been doing my /r/SneerClub thing without having the notion to mention it
It is always a bad sign when some imagined genius goes 'they don't know what they are talking about' and not 'because I didn't follow the official education path, I will use the wrong terms, making me hard for trained experts to understand'.
> 'because I didn't follow the official education path, I will use the wrong terms, making me hard for trained experts to understand'

This simple statement is the whole dataset Yud dreams of training a god-AI on
That’s an extremely succinct description of something I’ve been feeling for nearly 10 years.
Sort of seen it happen a few times with self taught programmers.
Peak autodidact

Garbage in, garbage out, as the bivalve said

‘Education doesn’t call bullshit’

And never will he think of a good faith reason why that would be so.

Clearly teachers should start suplexing more children, or making cringe compilations about their failures on youtube.

> 'Education doesn't call bullshit'

Said by a man who has clearly never attended an academic conference or even watched one on YouTube. Or observed academics on social media. Or done anything but admire his own beautiful image in his mirror, tbh
He had his own bullshit called out all the time though lol.
Not by real scientists/educators clearly, those people calling him out are the ARJ (Anti-Rational Jocks).
A good fifth of the lab meetings I go to are just the presenter of the day explaining why a paper they've found is misusing a methodology or basing important results on a graph that doesn't show what they think it shows.
Yeah, tbf, my comment isn't totally correct; he said 'doesn't teach calling bullshit' and I just extrapolated that into 'teaching teachers to call bullshit'. But it is all pretty dumb, esp as he started Rationalism (according to his own words) because he discovered that people don't think correctly (aka, have biases) while trying to do AGI safety research. But even teaching people to think correctly is super hard. And he should realize that calling out scientific bullshit means showing people why it is bullshit (just shouting 'bullshit!' isn't helpful), which is a lot of work, and it doesn't help the career of any educator/scientist to go shouting/explaining at cranks (iirc there was a physicist who wanted to start a company cranks could pay to have their 'totally not cranky real new physics' stuff gone over; wonder what happened to that). This is one of the other reasons why his remark is a bit silly.

See this remarkable blog discussion, on how Galactica should’ve been expected to spew cleverly disguised bullshit. In fact it was expected to do just that, by experts other than those with blind faith in AI to generate meaningful text.

Some of the errors described make it sound like the developers made zero effort to hand-code Galactica, preprocess any of Galactica's training corpus, or hand-code post-processing algorithms… For example, making up incorrect references/citations is fixable if you have the AI working with BibTeX entries and articles preprocessed to label citations, not the raw text of articles. Edit… you know, an AI that could take a couple of paragraphs from an in-progress scientific article, process them, and then recommend possible citations would be a handy tool, but not revolutionary…
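To make the preprocessing idea concrete, here is a minimal sketch assuming a fixed bibliography to resolve against; the `[CITE:…]` token format, the regex, and the example entries are all invented for illustration and have nothing to do with Galactica's actual pipeline:

```python
import re

# Hypothetical sketch: replace free-text citations with canonical BibTeX keys
# before training, so the model learns to emit keys from a fixed bibliography
# instead of hallucinating plausible-looking references.

BIBLIOGRAPHY = {
    "vaswani2017attention": "Vaswani et al., 2017",
    "devlin2019bert": "Devlin et al., 2019",
}

# Invert the map so human-readable citation strings resolve to their keys.
CITATION_TO_KEY = {text: key for key, text in BIBLIOGRAPHY.items()}

# Matches simple in-text citations such as "(Vaswani et al., 2017)".
CITATION_PATTERN = re.compile(r"\(([A-Z][A-Za-z]+ et al\., \d{4})\)")

def label_citations(text: str) -> str:
    """Replace recognized in-text citations with [CITE:<bibtex_key>] tokens."""
    def replace(match: re.Match) -> str:
        key = CITATION_TO_KEY.get(match.group(1))
        # Leave unrecognized citations verbatim rather than inventing a key.
        return f"[CITE:{key}]" if key else match.group(0)
    return CITATION_PATTERN.sub(replace, text)

if __name__ == "__main__":
    sample = "Transformers (Vaswani et al., 2017) changed NLP overnight."
    print(label_citations(sample))
    # Transformers [CITE:vaswani2017attention] changed NLP overnight.
```

A model trained on text labeled this way can only cite keys that resolve against the bibliography, which is the point of the suggestion.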
> 4ch data would cut more to the point

Have I been missing the corner of 4chan where they discuss mathematical proofs in depth?
/sci/ has the occasional PhD shitposter, even though normally it is full of schizo posters. There was one notable example where an anon helped solve a 25-year-old math problem while speculating about an anime: https://www.wired.com/story/how-an-anonymous-4chan-post-helped-solve-a-25-year-old-math-puzzle/
4chan is such a weird place. It’s like finding out that a diamond mine was also used as an open pit latrine for a city with a recurring cholera problem. There are diamonds down there, but…
I have to go on there with the belief that 80% of the posters are larping and steelmanning crazy ideas so they can reinforce and add clarity to their views. At least that is what I do when I post there. But I know the number is lower than that and most of them are fucking nuts.

“I theorize this to be unironically true” is the most useless sentence ever written. Yeah no shit, that’s how saying something works.

Your puny Earth mathematicians cannot realize that every odd composite number is the product of two odd numbers greater than 1. Such knowledge is above mere mortals.

Some people find this man profound.

He was right - it’s not in the text dataset. But idk how he made the moon logic jump to “that’s because humans don’t call each other on bs”

Wait, was he really expecting a GPT-3-style text generator to give a right answer?

from now on all of my tweets will begin with “I theorize this to be unironically true:”

So AIs can prove the earth is flat?

If he thinks people are just generally always wrong/irrational, why does he think training models on their output will ever lead to a general artificial intelligence?

It is like they assume everyone acts the way they do: publish incorrect bullshit derived from first principles, without anyone other than them proofing it or doing a long deep dive into the other literature and work.

GIGO. Garbage in, garbage out.

He may be right in that most instances in the training dataset of “What’s a proof of [X]” are followed by something that looks like a mathematical proof, but it’s an absurd leap of logic to assume that this says something about the education system not training people to spot bullshit, rather than just an overlooked property of the training dataset. (This model was apparently trained on “scientific knowledge”; you can see how the dataset may have ended up with more paragraphs of the form “What’s a proof of [X]? [Proof of X]” than of the form “What’s a proof of [X]? Sorry, there is no proof of [X] because [X] is false.”)

I like that Yud saw a block of mathematical-looking gibberish and immediately assumed it was not only correct, but also so groundbreaking that it would overturn thousands of years of math.

You’d think that a “public intellectual” like him would occasionally consider Occam’s razor

...that's not what he was saying at all. He was criticizing the fact that when asked to prove that all odd numbers are prime, it generated a fake proof instead of something like "There is no proof that all odd numbers are prime, because it is not true." The questionable part of it is when he claims that the reason it is unable to "call bullshit on bad questions" is because there are few examples of that in the training data, and that that's because modern education systems don't train people to spot bullshit. You can criticize Yud on a lot of things, but not understanding basic math isn't one of them.
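As a quick sanity check on the math in question, the smallest odd composite already refutes the claim:

```latex
\[
  9 = 3 \times 3
\]
% 9 is odd and composite, so "all odd numbers are prime" fails at n = 9
% (and again at 15, 21, 25, ...). That 2 is the only even prime is what
% makes the pattern look tempting for small cases: 3, 5, and 7 are all prime.
```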
I see no positive reason to credit him with understanding maths

I unironically believe that one of the reasons these people believe the magical god machine will be invented soon is that they have convinced themselves that they are nothing more than computers and that you can see this by the fake robot way they talk.