r/SneerClub archives
“If AI can finish your sentences, AI can finish the world” — Yudkowsky’s Maxim (https://i.redd.it/qq75yvoicmea1.jpg)

I am starting to see why actual AI researchers think this guy is a ding dong.

Do they? Genuinely asking. I'd love to see some examples of that.
I'm sure they *would*, but the big disclaimer here is "if they think of him at all" and afaik they generally don't.
Unfortunately from hatebrowsing LW, it seems a bunch of frequent posters do actually work at AI labs, especially OpenAI and Deepmind and such. Not sure how much these guys worship Yud, but they sure are Rationalists and endorse him enough to post regularly on his dumb forum.
Hm. So I guess my next question would be exactly what "work at" means, and whether it's in the same sense I could say I "worked at" a nationally known CDC agency: I actually did have a job doing assorted data-cleaning and tech scut-work for said agency, and its director apparently is a recognized authority in his field. Then again, even recognized experts can be remarkably dumb when venturing too far outside their expertise, so I guess anything's possible.
Yeah, I think that's a valid question. You obviously could also make an argument that the employees/heads of AI labs aren't really even "experts" in AI, just people trying to monetize it as quickly as possible. This [post](https://www.lesswrong.com/posts/3S4nyoNEEuvNsbXt8/common-misconceptions-about-openai), its comments, and its "thanks for the help on this" include a bunch of LWs that at least work*ed* for OpenAI - it seems like there are quite a few of them. From some cursory googling, a lot of these people work on the "alignment" team, which I really wish they would just call the "AI safety" (a real thing people should care about) team. I only really think about this crap because it's kind of alarming that the people who should be addressing real problems, like making AI less racist, are instead getting paid tons of money to think about how we're going to get turned into paperclips within 5 years.
> "AI safety" (a real thing people should care about) Yup, but honestly, I'm increasingly convinced that if humans are getting melted down to paperclips five years from now, it's going to be as a consequence of economic and political decisions that leave human health and safety in the hands of algorithms too *stupid* to do better, because it was good for someone's bottom line and the humans harmed by it were too disenfranchised to put up an effective resistence. That's alarming because it's a depressingly plausible scenario, and this sci-fi silliness is... not helping.
Capitalism is already the paperclip maximiser and it only required human intelligence to achieve
That would be akin to porn for this sub. lol
he's a nobody in the AI world. he's just useful for amusement on subreddits like this.

…what

Does he believe that AI will learn to break hashes?

And is the proof basically “it exists on the internet, therefore AI will learn how to predict it”?

How can someone who thinks he's that smart be this dumb?

I guess I’ll put online a list of <program, halts-or-not> pairs so that our future AI will learn to compute uncomputable things. You’re welcome, Basilisk!

Yeah "a programming language where a function is preceded by the hash of its body" was my go to example of a language that cannot be learned by reinforcement learning. Funny that Yudkowsky thought of the same model but instead his brilliant mind leapt to the conclusion that this means the acausal robot god could break one way hashes.

Thanks, I needed an explicit proof that Yudkowsky has no real idea how machine learning works.

I used to think he was very smart. That was when I was 14 and deep into reading HPMOR. Over a decade later... hmm, maybe there's a reason none of these people are actually respected in the fields they're so worked up over. *Just maybe*.
Or hash functions. The whole thread rests on the premise that there is no meaningful difference between `f(plaintext) -> hash` and `f(prompt) -> response`. If one is computationally irreversible, then the other must be also. If Yudkowsky were asked to invent an unbreakable encryption algorithm, he'd probably come up with ROT-13.
Did you ever hear about the use of the word "phyg" on lesswrong?
Hahah, I had forgotten about that ...

Does… does he think that you can learn how to break encryption across the board by looking at a bunch of hashes??

“I asked ChatGPT whether P was equal to NP and it said yes, what’s so hard about this computer science lark anyway?”
QED!

This fucking guy, cooming over T9 predictive text.

For those missing the howler:

A hash function typically converts any string of characters, any length, into a different string of characters with fixed length. For example, the dumbest possible hash function, with only two possible outputs, would be “Is the number of characters even (0) or odd (1)?” Anyone running the same hash function on the same input will get the same output, which makes it useful for various computery things. However, any given output necessarily represents an infinite number of possible inputs, so by definition no amount of cleverness or pattern recognition or big data can help you learn how to reverse a hash function. It’s like saying AI will learn how to divide by zero.
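If it helps, here's that "dumbest possible hash function" as actual code (a toy sketch, function name invented for illustration):

```python
# Maps any string to 0 or 1 depending on whether its character count
# is even or odd.
def parity_hash(s: str) -> int:
    return len(s) % 2

# Lots of different inputs collapse onto each output, so the output alone
# can't tell you which input produced it.
assert parity_hash("hi") == parity_hash("SneerClub is") == 0
assert parity_hash("a") == parity_hash("Yud") == 1
```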

Many hashes are reversible in principle (especially if you're looking for inputs that e.g. look like sensible English text), just computationally difficult to reverse. I doubt chatgpt will do it - it's not really what it's good at - but it's not prima facie impossible.
sure but i think the point is that there are a bunch of hash functions that aren't reversible, so why speak so generally about AI being able to do magical things in all circumstances?
there was a whole argument several years ago where people who knew anything about anything tried to convince the sort of rationalist who says "but a *sufficiently advanced* AI ..." that even an arbitrarily intelligent AI probably couldn't actually do computationally infeasible hash reversals just by being sooooper intelligent. i'm pretty sure the rationalists didn't actually take this on board tho
I'm curious about the Venn diagram of "rationalist-type-people who are sure a sufficiently advanced AI could..." and "rationalist-type-people who think it's self-evidently stupid to imagine a sufficiently advanced contraption could overcome thermodynamic entropy"
INSUFFICIENT DATA FOR MEANINGFUL ANSWER.
the second contradicts the word of the Sequences, in which Yud literally comes up with an AI *so* intelligent it can find new physics that mean the Yud emulation can live forever. so it can't be a *true* rationalist
brb gone to ask ChatGPT how to turn lead into gold
I don't think it's overcoming entropy; it's just a sci-fi mindset that the AI will either find a flaw in the hash function or develop some unexpected quantum computing mumbo jumbo that can break it. It's not quite "AI will figure out FTL travel" but it's the same mindset.
yeah, I didn't mean to stretch the analogy that far, tho, just to imply that "we will discover the perpetuum mobile" was an earlier era's "superhuman AGI is right around the corner".
least surprising thing I've heard all week
Since half of them think we're living in a simulation, maybe they think the AI will just brute force the hash in (from our perspective) an imperceptible amount of time. Sounds at least as plausible as anything else I've heard!
Yup. Informally "hash" usually means "one-way hash", but there definitely are two-way hashes, i.e. algorithms that are reversible by design, so unintentionally reversible ones are possible too.
> However, any given output necessarily represents an infinite number of possible inputs

To be pedantic, this isn't true. We can define a hash function that maps the string "Epistaxis" to 0 and every other string to 1. Then an output of 0 represents exactly one input.
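In code, the counterexample is a one-liner (purely hypothetical, of course):

```python
# Maps exactly one string to 0 and everything else to 1, so the output 0
# has exactly one preimage rather than infinitely many.
def h(s: str) -> int:
    return 0 if s == "Epistaxis" else 1
```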

some people are correct to fear being replaced with a very small shell script

At least shell scripts can be useful. It's another thing to be put out of a job by a computer program which excels at one thing only: forming grammatically correct, superficially plausible, but ultimately nonsensical utterances.
Yudkowsky, the human form of :(){ :|:& };:
Don’t give me hope

I get what he’s trying to say, but it’s wrong in that typical confused Yudkowsky kind of way.

He’s saying that sometimes you have an association A -> B where it’s easy to deduce B starting from A, but hard to deduce A starting from B (that’s why he uses hashes as an analogy, an example is B being a hash of A). So it would be easier for a language model to output the sentence “A therefore B” than the sentence “B because of A”, because in “A therefore B” the model has to predict B when it has already predicted A thus having it in its context window, whereas in “B because of A” it needs to predict A when it has B in its context window which is harder.

We actually have seen that when transformer language models try to do a long-winded deduction (e.g. a math calculation), they tend to do better when they are prompted to show the step-by-step deduction than when they are prompted to show the correct answer right away.

But the model doesn’t actually have to be SUPER intelligent to output “B because of A” out of the blue. It can just produce “A”, “A therefore B” and “SENTENCE FLIP” in its internal representation and then output “B because of A”. This is likely the same way a human would go about producing a sentence like this, so no, the AI doesn’t have to be more intelligent than a human to predict what a human would say.
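To make the asymmetry concrete, here's a rough sketch using a hash as the A -> B map (a toy with a deliberately tiny input space so the brute force actually finishes; everything here is made up for illustration):

```python
import hashlib
import string
from itertools import product

# Forward direction (A -> B): one cheap function call.
secret = "snrk"
digest = hashlib.sha256(secret.encode()).hexdigest()

# Backward direction (B -> A): nothing cleverer than trying candidates.
def invert_by_search(target_hex: str, length: int = 4) -> str | None:
    for chars in product(string.ascii_lowercase, repeat=length):
        candidate = "".join(chars)
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hex:
            return candidate
    return None

print(invert_by_search(digest))  # up to 26**4 = 456,976 attempts for 4 letters
```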

> This is likely the same way a human would go about producing a sentence like this

Eh, that's a reasonable expectation but I think what we do know (so far) about natural language confounds it -- i.e. the procedures and internal representations used by the speaker that produces a meaningful natural-language sentence and the listener that accurately understands it don't necessarily have much to say about each other at all, and certainly can be different in all kinds of ways without affecting the success of the communication. So, in the first place there's no reason to think that a measure of how smart an algorithm has to be to predict any given text will yield a meaningful measure of how smart it is *relative* to the text's producer. And ultimately I'm not even sure how he's defining "intelligent" and "predict" here, so there isn't much to engage with.

Can someone reshare those posts where he makes strong predictions about nanotechnology and nanobot apocalypse? To me, that just solidified the idea that he likes finding the newest tech and extrapolating an inevitable apocalypse from its development

I too, would like to see these posts
[here's one](https://web.archive.org/web/20081021051739/http://yudkowsky.net/obsolete/singularity.html)
> Plug in the numbers for current computing speeds, the current doubling time, and an estimate for the raw processing power of the human brain, and the numbers match in: 2021.

> Created in: 1996

No wonder he put it into an "obsolete" category and you can only access it through archive, lol.
I know this is one of them: https://web.archive.org/web/20081021051739/http://yudkowsky.net/obsolete/singularity.html

i think he just likes saying “hash”

Is this the same thread where he says that GPT could have developed the theory of relativity if it were trained on all existing books in 1911, therefore singularity?

This thread is dated last week. He said the stuff about relativity long ago, I think somewhere on Less Wrong (or at least in the Less Wrong/Sequences heyday). He also at some point, I believe, claimed that a superintelligent AGI could derive relativity from two frames of a film of something falling. The triple Omni AGI, but only two of them are guaranteed.
"[A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis...by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.](https://www.reddit.com/r/SneerClub/comments/dsb0cw/yudkowsky_classic_a_bayesian_superintelligence/)"
no he did not did he??

“More generally, it’s just not true that you only need to be as intelligent as a human to predict exactly what a particular human will say.”

The idea that any kind of intelligence (whatever that means) would be able to predict exactly what a particular human will say just seems totally insane to me. People are spontaneous! Sure a super intelligence might get it right some of the time based on past experience, but I don’t know how it could possibly predict asdo;fihjasd;of jsaff

I knew you were gonna say that

Yud has “interesting” ideas about what “intelligence” means, example 30382.

what??? what is he even saying??

That a fully functional ChatGPT will be able to accurately decipher all one-way hashes. This is the kind of algorithm that is commonly used to actually store people's passwords, rather than storing them in an unencrypted or decryptable manner. When a user provides their password, the hash function is called and the resulting hash is compared to the stored value. If someone gets hold of your DB of user IDs and password hashes, they still can't log in, because the stored hash does not itself hash to the stored hash, and the owner of the data never actually knows your password. It would not surprise me if he was on the way to making an argument that a super AGI would just be able to log in as anyone on any system because … pREdiCtiOn!
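For the curious, the flow he'd be hand-waving past looks roughly like this (a bare-bones sketch; real systems use salted, deliberately slow hashes like bcrypt or argon2, not plain SHA-256):

```python
import hashlib

def store_password(password: str) -> str:
    # The site keeps only the hash, never the password itself.
    return hashlib.sha256(password.encode()).hexdigest()

def check_password(attempt: str, stored_hash: str) -> bool:
    # Log-in check: hash the attempt and compare to the stored value.
    return hashlib.sha256(attempt.encode()).hexdigest() == stored_hash

stored = store_password("hunter2")
assert check_password("hunter2", stored)
# A thief who leaks the DB only gets the hash, and submitting the hash
# itself fails, because hash(hash) != hash.
assert not check_password(stored, stored)
```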

Predictable