r/SneerClub archives
Yudkowsky attempts to solve a non-computable problem with proof-by-metaphor (https://www.reddit.com/r/SneerClub/comments/12jb0t8/yudkowsky_attempts_to_solve_a_noncomputable/)

LessWrong post: GPTs are Predictors, not Imitators

In an attempt to probe the ineffable mysteries of ChatGPT, Eliezer Yudkowsky poses the following question:

Imagine yourself in a box, trying to predict the next word […] for all the text on the Internet. […] Is this a task whose difficulty caps out as human intelligence, or at the intelligence level of the smartest human who wrote any Internet text?

The correct answer is “neither”. And indeed that’s the answer that Yudkowsky arrives at, sort of: the original question is a clever ruse in which we have been set up to fail, so that a greater truth can be revealed to us by one of history’s greatest thinkers.

Unfamiliar as he is with any of the established literature in the field of computer science, though, Yudkowsky gives an answer that is simultaneously too short and too long, and ultimately a copout:

GPT-4 is still not as smart as a human in many ways, but it’s naked mathematical truth that the task GPTs are being trained on is harder than being an actual human.

What does “harder than being an actual human” mean, exactly? Yudkowsky doesn’t seem to know, as indicated by his only response in the comments section to one of his thoughtful critics.

So what’s the real, correct answer to the original question? Consider the following somewhat more technical rephrasing of it:

What is the shortest computer program that can generate a given sequence of symbols?

This is a well-studied question whose answer has a well-known name: Kolmogorov complexity. It is, in general, known to be non-computable.
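For anyone who hasn't run into it, here is the usual formalization, sketched with the standard convention of a fixed universal Turing machine U (any other choice of U shifts the value by at most a constant):

```latex
% Kolmogorov complexity of a string x, relative to a fixed universal machine U:
% the length of the shortest program p that makes U print x and halt.
K_U(x) \;=\; \min \{\, |p| \;:\; U(p) = x \,\}

% Invariance: for universal machines U and V there is a constant c_{U,V} such that
%   K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all } x,
% so "shortest program" is well-defined up to an additive constant.
% Non-computability: no algorithm maps x to K_U(x); this follows from a
% Berry-paradox-style argument, or by reduction from the halting problem.
```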

This is a lot like if Yudkowsky had asked “which would halt execution first: ChatGPT, or a human?”, and then gone on at length about metaphors involving rabbits and hares or something. The average undergraduate computer science student, by contrast, would immediately recognize that question as being ill-considered. Asking that kind of question is often an indication that you have fundamentally misunderstood the larger task that you’re asking it in service of.

As mentioned above, one commenter gives a thoughtful response explaining why Yudkowsky is fundamentally wrong about his approach to this, and Yudkowsky rewards them with petulant dismissal, as is his way.

A human can write a rap battle in an hour.  A GPT loss function would like the GPT to be intelligent enough to predict it on the fly.

Lmao the linked commenter also caught this but uh… Yudkowsky telling on himself a bit here assuming people have to write rap battles in advance

Big yud has never watched “whose line is it anyway” confirmed
I mean, you have the set-up, the introduction of the relevant characters and the setting, the rhythm of the thing itself, and you have to make sure the two styles are distinct enough that it should be obvious who's rapping when without dialogue tags. To make it a good scene we need to know the respective motivations and the history between the rap combatants. All of that is quite a bit of work :p

Also, much like your grandma’s facebook posts, Yudkowsky apparently uses inexplicable capitalization rules: he always writes “Mind” instead of “mind”.

I'm being charitable in assuming it's inexplicable; the obvious alternative is that he's using the word in the same way that Iain M. Banks' Culture novels do.
ugh lets try not to associate Banks with Yud, it's too depressing
I mean he does give off big Bora Horza Gobuchul vibes, just determined to murder machine intelligences at all costs for no particularly well thought out reason
Thiel has Veppers vibes. He'd absolutely make a company to host digital hells for human emulations, and probably be fine if it just barely broke even.
Look, from a utilitarian point of view it's worth it to host a few digital hells where billions of sapient minds undergo unimaginable suffering if doing so allows you to create a future where 10^50 equivalent minds live eternal lives of everlasting bliss. He'll definitely get around to that latter part someday!
you know you explained it right. saying they "larp" is not an insult, it's documenting their actions: they play it so straight to wish the fantasy into reality. instead of doing the work they cut corners, beg the right people: it's to avoid breaking their "immersive experience" of being players in the techno-scientific game. yud 's ego is so gigantic he would probably prefer suicide over admitting he is lazy.
>he always writes "Mind" instead of "mind". Huh? In the piece you linked, he specifically switches between "mind" and "Mind" to differentiate between the human and AI minds. Literally the first instance of the word is lower case. How could you read that piece and say he always writes "Mind"? I honestly can't tell if this sub is a parody of people making bad criticisms or if these are legitimate attempts lol. Like seriously. And to answer your question: >What does “harder than being an actual human” mean, exactly? From the text, it seems obvious that he's saying it's harder for a non-human to predict what a human would do than it is for a human to do it. Anything a human does is "something a human would do", but there are a lot of things the AI could do which don't fit "things a human would do". To give a slightly imperfect analogy, birds naturally know when and where to migrate. It's very easy for them to do because it comes naturally. For a human to replicate this, it would take more work and research to figure out what to do, because they lack the natural instincts, and would have to make up for that with extra work/thinking. Similarly, an AI needs to do extra prediction work in order to "act" like a human, to really think "what would a human do in this situation?" then do that. A human just needs to do whatever they want. Another analogy: Say you need to improvise a scene. Would it be easier to do it as yourself, or as some historical figure? I'd wager yourself, because there's research and practice that would go along with personifying a historical figure, while you can just do whatever you would normally do if you were playing yourself.
[deleted]
Regardless of how convincing they would be, the way they acted would, by definition, be a way in which they might act. But if you want to split hairs with the analogy, take the stage acting out of it: You are to go to the grocery store and grocery shop just like you normally do. Then you are to do the same, but shop like your neighbour would have done. Which is easier? The first. Obviously. You’re just doing what you do naturally. Whatever you do is, by definition, what you would do.
[deleted]
Hahahaha “it’s a bad analogy but I won’t expand on it just take my word for it it’s DUM!” Still can’t tell if this is parody or not.
It's not that it's dumb, it's meaningless semantics. The bar for behaving like a human is so low by your reckoning that all you need to do to act human is to be a human. Have you never spoken to a person online and wondered if you were speaking to a bot? Have you never seen or heard of a person behaving in a way that, to you, seems inhuman? Do you think every person has the same standard for 'inhuman' behaviour? Have you never thought, of a piece of art, 'where is the humanity?' I had a friend who had a cat that behaved like a dog, it did things that we typically associate with dogs, was that cat behaving like a cat, or a dog? Why? I mean christ, have you never known a person who one day started acting out of sorts and about whom you thought, that person is acting out of character. Or do you just consider the actions of everyone you know to be in character because they are being performed by that person? These questions are simply meant to make you think about what 'behaving like a human' actually means, both to you, and to others and why this is the case. It's a pretty broad subject and can be approached from a number of directions. I would wager, based on the fact that your baseline for self identification is to observe oneself doing a task that is so predictable that supermarkets have departments based around utilising that predictability as a means to increase sales, and that you think this is an appropriate analogy for defining self and by extension human behaviour, that you haven't actually thought much about it.
>I had a friend who had a cat that behaved like a dog, it did things that we typically associate with dogs, was that cat behaving like a cat, or a dog? Why? If the cat had been specifically trained to act like a dog, then it's acting like a dog. Or, if it's unsuccessful, it's acting like a cat acting like a dog. *Both* of those options are more difficult for a cat to achieve than merely acting like a cat.
Sigh, the cat analogy was only supposed to make you question what constitutes behaviour as ascribed to different entities. Saying that it is difficult for a cat to act a certain way is absurd. What you mean to say is that there is a way in which a cat would act naturally, and a way in which a cat would act when trained a certain way. But even that line of thinking is flawed because there is no one character which defines a cat. Maybe you're right and you are just being dumb, or deliberately ignorant.
[deleted]
I countered your poor expansion and you backed out immediately lol. Now you’re literally just calling me dumb. The definition of ad hominem.
> The definition of ad hominem mayhaps sir you have impugned me with an ad debate club!! please desist from this line of weird and bad contribution to the sub. thank you.
[deleted]
>after granting my point Lol no, after changing the analogy because you were splitting hairs on irrelevant aspects of the analogy... >you defaulted to a different example which by your lights at least works better in favour of your argument No, I changed it to a slightly-altered example which takes away your ability to split hairs to the same degree. And when you realized you couldn't find a hair to split to avoid engaging with the analogy, you said "Nah this is dumb, you're dumb (not quick enough), I'm not even debating you, debating is for losers, I don't even care if I'm wrong, god you're arrogant!" lol. I'd also add that saying "You only used that analogy because it helps to prove your point!" isn't the own you think it is lmao.
[deleted]
>I am a different person than you are, and float blissfully free of these “debate me” constraints which chain you to the illusion that conversation only exists to win dumb internet arguments Read: "I can make whatever point I want while being *incredibly fucking condescending* and if you try to actually engage with my argument I'll just run away and say I wasn't even trying anyway plus ur dumb!" >This was never about you, or your argument Yeah weird, why on earth would I expect that a reply to my argument would be at all related to me or my argument. Fuck me eh?
>Yeah weird, why on earth would I expect that a reply to my argument would be at all related to me or my argument. Fuck me eh? The point, sir, is that this is a Wendy's.
And my point is I'm at a Wendy's, and I ordered a burger and fries, and someone was like "Oh there aren't any fries" and I was like "But I see fries right there," and they were like "Look I don't even work here dummy don't expect me to deal with this stuff." I made an argument in response to another, someone replied to me, and then they were baffled and incredibly condescending when I thought their comment actually had something to do with my argument.
perhaps sir can return to this wendy's in a week, if at all
Yall are fucking savage XD
[deleted]
>so that when I continued to behave as if it wasn’t there And also *immediately* called me dumb... >If I were to grant any of the premises, deep and shallow, involved in order to get to your level I’d have done myself a disservice.
[deleted]
Considering I was responding to one of the worst critiques I've ever seen, which is proven false within the first couple paragraphs of the piece linked at the top of the post, and had been heavily upvoted, I think it's giving this sub the *benefit of the doubt* to wonder if it's parody. Otherwise, you have a sub full of people who can't read. Post-ban edit: Lol it's not "Alright when I do it", the point is that I wasn't insulting you personally, I was just wondering about a sub in general, and then you immediately insulted me personally lol, *while critiquing an aspect of my argument*. You did everything to make a person think you were arguing, then acted exactly like someone evading an argument would act, with tons of shitty condescension. You gave every reason to believe you had been arguing. Just like you think I need to rethink how I write, you need to do the same.
You know what, after rereading the thread, I see where you could be coming from. I could see how someone might take issue with the analogy and merely want to make a point about it that they happened to have in their back pocket, even though the issue you took wasn't actually relevant to the argument, and derailed the point. Perhaps you weren't intending to argue after all. (But the condescension from you fucking reeks)
[deleted]
>I have a very strong, very personal, distaste for your presumptuous “debate me, bro” attitude. You mean the attitude where you assume someone responding to your argument is responding to your argument? Yeah man these fuckin' assumptions. >I think the fact that you can’t see how the way you talk to other people should rightly prompt condescension from them speaks well to that distaste. The IRONY holy shit. You called me intellectually beneath you in your SECOND comment to me, then got all shocked Pikachu when I continued to assume you were arguing based on your shitty condescending attitude.
[deleted]
>I think the fact that you can’t see how the way you talk to other people should rightly prompt ~~condescension~~ assumptions that you're arguing from them speaks well to that distaste. "I don't understand why you wouldn't take my word for it that I wasn't arguing, just because I took issue with your analogy and called you intellectually inferior for holding the position you do!"
> Huh? In the piece you linked, he specifically switches between "mind" and "Mind" to differentiate between the human and AI minds. Literally the first instance of the word is lower case. Sorry, I must have overlooked that. That settles it then: he's using the term in the sense of Banks' Culture novels, which is appalling and stupid. It's cool when Banks does it because Banks knows that he's writing science fiction. It's dumb when Yudkowsky does it because Yudkowsky doesn't realize that he's writing science fiction.

Guess a problem with Yud's way of being an autodidact is not picking up any of the relevant literature and reading it himself. Think Claude Shannon might be a good start.

Anyway the thought experiment is malformed because if you are in the box like chatgpt you should also have an abridged/compressed copy of the whole internet with you. Which chatgpt has. (Op is slightly wrong here in linking to Kolmogorov complexity, as iirc that is a lossless measure, while chatgpt is lossy, but still weird that Yud doesn’t talk about information theory at all).

Now if you were to plan to be put into a box and take something with you to reproduce (lossy) text from the internet and you cannot take the whole internet with you the problem is actually not hard. It is in fact very simple. You just make chatgpt, train that on the internet and take it with you. ;). We are a species of tool builders after all.

The whole ‘make words up’ argument is also a bit weird, esp as that is a known weakness of GPT. If you ask it to make up a new sentence it cannot really, while for humans we just throw some grammatically incorrect shit together and boom, brand new sentence (An LW expertise! ;) ). I get that isn’t what he is talking about btw, I just wanted to mention this GPT inability. Anyway, the whole make up random words thing is also odd as making up random numbers is right there, and a lot more predictable due to how keyboards are laid out. Of course that wouldn’t make it predict a specific human, just all humans who didn’t know that you should use a random number generator to make up numbers. (iirc this was a flaw of the Qanon posts, where the ‘random’ numbers were clearly picked this way)

The reply is also a bit off btw.

For example, the human loss function makes some people attempt to predict winning lottery numbers. This is an impossible task for humans

People have predicted winning lottery numbers. If the method used to generate the lottery numbers is flawed. Which has happened, in fact globally there are enough flawed systems that somebody worked out which numbers are more likely. (this will not help people personally obviously, it is just somebody writing a few fun math science papers). And iirc people have also broken specific lotteries, but that might also be insider attacks or just other flaws in the system.

I also don’t agree with Yud’s conclusion. Due to chatgpt having a lossily compressed version of the internet, it is both an imitator and a predictor. It uses prediction to create an imitation of the internet. It is both! (And wow this ‘stochastic parrots’ line really got under his skin).

Anyway, DNA can build a full human being. A human in a box cannot build another human being. What does this mean about the Mind of DNA? Mu.
I am just skimming and am not actually contributing anything meaningful, but I thought it would be funny to point out that *2* humans in a box can, in fact, make another human
Big if true! ;)
It'll be a lot of fun testing the hypothesis.
The implications of this are enormous though I hope you get that. If 2 can make 1 person, those 3 can then make 1.5, those 4.5 can make 2.25, those 6.75 can etc. Unlimited growth potential!
I personally agree that Kolmogorov complexity is not the best way to think about what ChatGPT is being designed to do, but that's my point: Yudkowsky is none the less framing it in those terms without even knowing that he's doing it. He should have noticed this *immediately* and then changed course.
ow yes indeed.

Imagine yourself in a box

🎵 Imagine yourself in a box on a river

🎵 With tangerine trees and marmalade skies…

The argument Yudkowsky makes here is one he’s made several times, and it has the same fundamental mistake. He considers an ML model whose goal is to predict the next token in a variety of everyday contexts. He correctly observes that to accomplish this task perfectly, the model would have to be capable of impressive things like being able to invert hash functions, or simulate the inner workings of human brains.

Yudkowsky clearly wants the reader to take from this that predictors are magic and will be able to invert hashes in the future, which is absurd. (Though he stops short of actually reaching this conclusion in the post.) The obvious correct conclusion is that predictors will never be able to perform the task of prediction perfectly due to limits on computational power.

(By the way, Kolmogorov complexity is not relevant here. The relevant formalism for discussing hashes is the notion of a one-way function; or the discussion could be rephrased in terms of NP-complete problems rather than hashes.)

Kolmogorov complexity is relevant here because it's exactly the problem Yudkowsky is describing: he's asking about the smallest computer program that can recreate a given string of symbols (in this case, the total of all content on the internet). The thing about hash functions isn't right. Given a dataset, you can trivially implement a function that can perfectly predict the text that follows any given substring by just looking up that substring in the dataset and then returning the text that follows it. This function can't invert hashes, of course. Even if you were to somehow create a model that can generalize perfectly outside of the dataset on which it was trained, it still wouldn't gain the ability to invert hashes or do other magical things. The totality of all human conversation does not include hash inversion. If you were to create a model that can perfectly predict any string of text from any other substring, irrespective of the source of data (e.g. not constrained to human conversation), then yes such a model would be imbued with magical powers. But that's basically just a tautology and it's hardly worth discussing, and it's probably not even a coherent idea anyway.
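To make the lookup-table point concrete, here is a minimal sketch; the corpus and names are made up purely for illustration, and the point is only that perfect prediction on the training text requires no hash-inverting magic:

```python
def make_lookup_predictor(dataset: str, k: int = 6):
    """Build a 'perfect' next-text predictor for substrings of `dataset`.

    For any prefix that occurs in the dataset, return the k characters that
    literally follow its first occurrence. This achieves perfect prediction
    on the training text while modelling nothing at all.
    """
    def predict(prefix: str) -> str:
        i = dataset.find(prefix)
        if i == -1:
            return ""  # outside the dataset, the 'predictor' knows nothing
        start = i + len(prefix)
        return dataset[start:start + k]
    return predict

# toy usage
corpus = "the quick brown fox jumps over the lazy dog"
predict = make_lookup_predictor(corpus)
print(repr(predict("quick ")))   # 'brown '
print(repr(predict("sha256(")))  # ''  -- no magical generalization
```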
He's an idiot who never bothered to learn how any of this neural network stuff (that is older than he is) works. For one thing chatgpt is huge, on the order of “entire internet compressed lossily” huge. For another thing it is also nowhere near computationally powerful enough nor deep enough to replicate, when evaluated, what went into creating its dataset (which includes not only the human mental processes of very many people but also the real world events humans described). If you train a neural network on a hash function that is too complex for it to represent, trivially, it not only won’t build a hash function implementation within itself, it won’t even contain any part whatsoever of that hash algorithm. Cryptographic hash functions involve a lot of rounds of a simpler function and are easily just too sequential for some shallow parallel function to represent - not to mention their inverses have not yielded to techniques far more sophisticated than any variety of gradient descent.

“As Ilya Sutskever compactly put it, to learn to predict text, is to learn to predict the causal processes of which the text is a shadow”

This kind of mistake is fundamental; the task is precisely to predict the shadow, not the causal processes casting it.

This is a map vs. territory type problem, right? Do the sequences not say anything about that?
Even more fundamental is his misuse of commas. Isn't this guy meant to be a writer?
We can all use an editor. One of the pitfalls of blogs and substacks.

Sorry, but I’m lost somewhere in the middle of your post. Why would Kolmogorov complexity being incomputable, or the Halting problem for that matter, make Yud’s questions ill-considered?

Being incomputable doesn’t mean it doesn’t have a truth value. Kolmogorov complexity of a given language is a number, a well-defined one. A machine always either halts or doesn’t, and if two of them halt then we can ask which does it in fewer steps. You can’t write an algorithm that computes those answers, but the answers themselves exist, and are well-formed. You can ask the question.

In particular, the problem of “does this particular machine halt” is not uncomputable. It’s uncomputable to answer “for any machine, tell if it halts”. If you ask whether ChatGPT halts the answer is… yes. Yes it does. You could look at the source code (if it was public) and analyse it to arrive at that conclusion. We can use various formalisms like Hoare’s logic to derive a proof of halting for a concrete source code. And from how it behaves it’s clear that it always halts, simply because after some time it will just tell you “sorry, I timed out”.
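To put the “you can read halting off the source” point in code, here is a toy sketch (emphatically not anyone's real serving code): any generation loop with a hard token cap and a wall-clock timeout halts, and for this particular program that is provable by inspection, no general halting oracle required.

```python
import random
import time

def generate(prompt: str, max_tokens: int = 256, timeout_s: float = 30.0) -> str:
    """Toy text generator with the two exits every deployed system has:
    a token cap and a timeout. The bounded loop makes halting of this
    *specific* program trivially provable, which is all that matters here."""
    start = time.monotonic()
    out = []
    for _ in range(max_tokens):                   # bounded loop => always halts
        if time.monotonic() - start > timeout_s:  # "sorry, I timed out"
            break
        token = random.choice(["the", "a", "cat", "sat", "."])
        out.append(token)
        if token == ".":                          # the 'model' chose to stop
            break
    return prompt + " " + " ".join(out)

print(generate("Once upon a time"))
```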

The problem of “which formal language is more complex, A or B” for some concrete A and B can also be solved. For example, if both languages were regular, we can precisely tell the sizes of their respective automata that compute them. It’s not enough to say “Kolmogorov’s complexity is an uncomputable function” to dismiss such questions as ill-considered.

> A machine always either halts or doesn't, and if two of them halt then we can ask which does it in fewer steps. You can't write an algorithm that computes those answers, but the answers themselves exist, and are well-formed. You can ask the question. I agree. It's really an empirical question more so than a theoretical one. You answer the question "does this machine halt?" by letting it run and waiting for it to halt, and hoping that it does so in the time you have available. Similarly, "does model X have more predictive power than model Y?" is also, in general, an empirical question: you answer it by trying to optimize both and see how good you can make them. The fact that Yudkowsky is trying to answer these questions by way of philosophical shower-thought is an unambiguous indication that he has no idea what he's talking about. The average undergraduate student can (for example) immediately recognize when they are asking themselves something that is equivalent to the halting problem, and then they can relax their mind for the rest of their shower because they know that they're probably not going to reach a satisfying answer. It does not reflect well on Yudkowsky that he continued to ponder this for the remainder of his shower, and even published his thoughts afterwards despite them reaching a (necessarily) vague and dissatisfactory conclusion.
It's interesting to note that Yud, despite his STEM pretensions, is vastly worse at dealing with empirical evidence than philosophers who explicitly reject empiricism in their worldviews.

[deleted]

[deleted]
[deleted]
everything you’ve written here is so on point it’s making me finally want to write that essay about the tech industry’s particular brand of fascism
[deleted]
Seems like they were just motivated by your two good posts here, nothing indicates familiarity to me in that comment.
nope! I've been lurking sneerclub (by way of RW) for a long while just cause the posts are quality and fun to read, and it's always good to be able to sneer at [rationalist talking points](https://youtu.be/cuxZ2u8-WXg) (artist's rendition) in case they recruit any more of my friends but generative AI and the assholes behind it have unfortunately inserted themselves fairly heavily into my work, and sneerclub's takes on the tech are the best and most accessible I've found anywhere. this stuff gets under my skin enough that I want to write about it, if only as a safety valve so I don't get too snippy at work
[deleted]
my thoughts exactly
>There is no reason they don't have some trained, small model to check for "is this an exam? what kind of exam". Then contract out to an economics phd to slot in the correct answers, and then finetune/train against that until the ai gets at least a passing grade. In fairness to OpenAI this is absolutely what they're doing and they're not even trying to hide it. It's not a bug, it's a feature. It's not something that happens *live* - like, there isn't a Ph.D. student on the other end who responds when you enter prompts for ChatGPT - but they are pretty explicit about the fact that they hire people to create specialized training data for everything they want their models to do. For example, they hire programmers to write high-quality solutions to Leetcode problems, and this is what makes ChatGPT good at writing code. My preferred conspiracy theory regards the reason that they aren't releasing their models any more: *maybe it's because they're basically the same as GPT-2.* They want us to think that there's some special new modelling magic going on, when actually the only substantial difference between subsequent versions is improvements to the training data.

I don’t think predicting text is quite the same as computing its Kolmogorov complexity. However, if we model human beings as Turing machines, then the problem of predicting any nontrivial semantic facts about the future behavior of humans is undecidable by Rice’s theorem.

If we restrict the problem so that we are only interested in looking for solutions we can verify quickly, then it becomes NP-complete. In fact, most of the problems that we would associate with “intelligence” are going to be NP-hard in the worst case. So, in that sense, all intellectually hard problems are basically the same; they all efficiently reduce to solving elaborate 3-SAT instances. But I think Yudkowsky is using the word “hard” in a slightly more loose and informal way.

For example, it is known that, in general, finding a solution to a Mario level is NP-hard, but we don’t usually consider playing Mario games to be especially hard in practice. Though, by this standard, I’d still have to somewhat disagree with him. It intuitively doesn’t feel like there’s much of a difference in difficulty between predicting when I will make mistakes and preventing myself from making those mistakes.

Although, I think he’s right that GPT can do a bunch of amazing things that humans cannot do, but it seems like a leap to go from this to these crazy godlike AGI conclusions. Computers already regularly solve problems that humans consider informally “hard”. It would take me ages to accurately compute 2^(1/50) to more than a few decimal places, but a computer can do it in seconds. Likewise, it would probably take several human lifetimes to solve a 3-SAT instance with over a million variables, but computer 3-SAT solvers have been able to do this in days or weeks.
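For the arithmetic example, here is roughly what “seconds for a computer” looks like; a throwaway sketch, with the million-variable 3-SAT comparison left as a remark, since dedicated SAT solvers (not shown here) are what handle those.

```python
from decimal import Decimal, getcontext

# The fiftieth root of 2, i.e. 2**(1/50), to ~60 significant digits.
# Doing this by hand to even a handful of decimal places is miserable;
# this runs effectively instantly.
getcontext().prec = 60
fiftieth_root_of_two = Decimal(2) ** (Decimal(1) / Decimal(50))
print(fiftieth_root_of_two)  # 1.01395948...
```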

Predicting text is not the same as computing its Kolmogorov complexity, but I think that asking about the *smallest computer program that can predict a given text* - which is what Yud is doing here - is indeed the same as asking about the Kolmogorov complexity of that text. Like, if you can accurately predict the text that follows from any given text fragment, then that's necessarily equivalent to being able to reproduce the entire text. And I think you're really overthinking this anyway. Yudkowsky doesn't know what NP-complete means, or any of this other stuff. He speaks in loose language because he doesn't know even the most basic mathematical facts that are necessary to talk about these problems in a sophisticated way.
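The “predicting any continuation is equivalent to reproducing the whole text” step is easy to make concrete: roll any next-chunk predictor forward from a seed and you regenerate whatever text it is perfect on, so a perfect predictor is at least a decompressor for that text. A toy sketch (the predictor here is a stand-in for illustration, not a claim about how GPT works):

```python
def reproduce(predict, seed: str, length: int) -> str:
    """Autoregressively roll out a next-chunk predictor. If `predict` is
    perfect on some text that starts with `seed`, this regenerates that
    text, which is the sense in which perfect prediction subsumes
    reproduction of the whole string."""
    text = seed
    while len(text) < length:
        nxt = predict(text)
        if not nxt:
            break
        text += nxt
    return text

# toy predictor that is 'perfect' on exactly one known string
KNOWN = "the quick brown fox jumps over the lazy dog"
predict = lambda prefix: KNOWN[len(prefix):len(prefix) + 4] if KNOWN.startswith(prefix) else ""

print(reproduce(predict, "the q", len(KNOWN)))  # prints the full sentence
```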
He doesn't mention anything about the "smallest" program to reproduce a piece of text, and I don't think this is the stated goal of LLMs like GPT. The machine isn't trying to reproduce existing bodies of text as efficiently as possible, it is trying to produce original text that looks like it was written by a human. If we replaced humans with some other simpler and more predictable machine that produces text, then the language model problem could become much easier even if the problem of compressing the text is still very hard. Also, I think Yudkowsky must know at least a little bit about computational complexity theory. He's acquaintances with Scott Aaronson and it also would be very surprising if complexity theory has never ever come up in all the years he's been writing about AI. He's not stupid or ignorant just because he may be wrong.
He explicitly asks about the least-intelligent agent that can accomplish the task (emphasis mine): > Is this a task whose **difficulty caps out as** human intelligence, or at the intelligence level of the smartest human who wrote any Internet text? The whole point of his writing here is that he's trying to figure out if ChatGPT is dumber, equally smart, or smarter than a human. I don't think Yudkowsky knows anything about computational complexity theory. I'm not aware of any evidence to suggest that he does, anyway. He talks about these things as if he knows nothing at all about them. I have no idea what he's thinking when these things come up in conversations with people who actually know what they're talking about. I suspect that he deliberately avoids such conversations, and that when he can't avoid them he does his best to pretend that he knows what people are talking about. Maybe he even believes that he actually does understand what's going on, despite that clearly not being true.
In that quote he's explicitly referring to the problem of predicting human text, the language model problem, and he's asking about the (informal) difficulty of that problem. Like, could a "super intelligent" computer do better at this problem than a human. Again, I don't think this has anything to do with Kolmogorov complexity. Regardless, this is undecidable and so there should be no limit to how good a machine can get at predicting human text. That is, no machine can solve this problem in general, and for any machine G, there should always exist another machine G' that outperforms G at the language model problem. And you're right that his writing seems to reflect a total ignorance of this. He also has a persistent issue of talking in vague terms about "intelligence". Like, why couldn't a machine be very good at predicting human text but still be outperformed by humans at other kinds of problems? I think you and I mostly agree, but I just don't think Kolmogorov complexity is the best model for the problem he's talking about.

Yudkowsky is familiar with Kolmogorov complexity (of course). He has written about this.

You do agree that the more intelligent you are, the easier it will be to predict text, right?

I fail to see any valid criticism here.

>Yudkowsky is familiar with Kolmogorov complexity (of course). He has written about this. He's written about lots of things he doesn't understand.
The more data you have, the better your prediction; I don’t see how this is a measure of intelligence.
I think it depends on how you define intelligence.
[deleted]
I don't disagree with any of that. I hate that they used that 1994 definition of intelligence in the sparks of AGI paper. Their definition is definitely incorrect and racist (biased at best). That said, I don't want to throw the baby out with the bath water.
[deleted]
?
Focusing on the part that’s actually interesting, no I don’t agree that intelligence makes you better at predicting text. For example, even if you’re arbitrarily intelligent, you simply cannot predict the continuation of a fragment of a constructed language you’ve never seen before. You simply do not have enough information to determine the relevant grammatical rules, let alone figure out the root words you’d need (because they’re not represented in the tiny fragment you’re trying to continue). Even with a relevant training corpus, it’s easy to make rules which are too complicated to be fully represented in a training set of some fixed but arbitrary size. Like I could encode arbitrary information using vowel harmony for example. And I don’t think that’s a hypothetical problem, in fact I think it’s worse in the real world because we reasonably only put certain kinds of things on the internet. It’s a channel of information which I claim cannot accurately convey a complete model of human behaviour at the very least, and no amount of intelligence (whatever that means) can fix that. Edit: And for a much simpler example, no amount of intelligence can allow you to accurately predict a random string.
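And the random-string point is easy to sanity-check empirically; a quick sketch with arbitrary stand-in predictors, since no strategy can beat 50% on independent fair coin flips:

```python
import random

def accuracy(predictor, n_bits: int = 100_000) -> float:
    """Fraction of independent fair coin flips a predictor guesses correctly.
    Whatever the strategy, the expected value is 0.5."""
    random.seed(0)
    history, correct = [], 0
    for _ in range(n_bits):
        guess = predictor(history)
        bit = random.getrandbits(1)
        correct += (guess == bit)
        history.append(bit)
    return correct / n_bits

always_one = lambda history: 1
follow_majority = lambda history: int(sum(history) > len(history) / 2)
print(accuracy(always_one))       # ~0.5
print(accuracy(follow_majority))  # ~0.5
```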

Is there a meaningful distinction here between ‘prediction’ and ‘imitation’?

I’m not at all deeply informed about this topic. It comes off to me like he’s getting hung up on semantic differences between these two words that don’t really matter with regard to the larger conversation.

No, there isn't. The approximation of any dynamical system is equivalent to attempting to predict its future state given its current state; "imitation", "simulation", "prediction", etc are all fundamentally the same thing in this respect. Yudkowsky can be forgiven for not realizing that, since some professional researchers seem not to know this either, but (as you say) he should not be forgiven for trying to cover that up by playing semantic games instead.