r/SneerClub archives
Big Yud copes with GPT-3's inability to figure out balanced parentheses: it's got to be doing this on purpose! (https://archive.is/45Tys)
62

[deleted]

This is the most elegant way to tell an AI Rapture^TM grifter to get rekt I’ve seen.
It's even worse than this. After all, bracket matching is a dumb task that can be solved by dumb machines. However, there's a strict language hierarchy ([https://en.wikipedia.org/wiki/Chomsky_hierarchy](https://en.wikipedia.org/wiki/Chomsky_hierarchy)), and essentially all common networks learn variants on regular grammars. Bracket matching is very widely known to sit in the layer above, and requires a slightly more complex context-free grammar to parse. This limitation also means that we shouldn't expect GPT-3 to generate complex, arbitrarily nested computer programs either, although it's surprisingly good at simple coding tasks. Basically, if big Yud had not bitten his maths teacher, and had managed to attend university for long enough to get to compilers 101, he would have expected GPT-3 to behave exactly like this.
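To make that concrete: matching brackets needs an unbounded counter/stack, which a finite-state machine (i.e. a regular grammar, a regex) simply doesn't have. A minimal sketch in Python, function name mine:

```python
def brackets_balanced(text: str) -> bool:
    """Return True if every "(" in `text` has a matching ")".

    The running depth counter is an unbounded piece of state: exactly
    what a context-free parser needs, and what a finite-state machine
    (i.e. a regular grammar / regex) cannot provide.
    """
    depth = 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # ")" with no matching "("
                return False
    return depth == 0              # every "(" must eventually close

# brackets_balanced("((a)(b))")  -> True
# brackets_balanced("(()")       -> False
```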
[deleted]
Yeah I was thinking of that tweet. I fully agree with what you're saying about not letting it near anything important, and I don't think it can generate interesting code, but as a boilerplate generation method, it's pretty damn cool.
[deleted]
So instead of writing CSS you want some generated based on all the CSS which is already on the internet, spit out by a company half owned by Musk?
> Basically this is the robot equivalent of a newbie “programming” by blindly copy/pasting stuff from stack overflow. Cool, but not going to replace any of your engineers
ngl this has me sweating bullets rn
> I wonder if Yud is the kind of guy who tries to write an HTML parser with a regex?
That’s way out of his league
> Basically, if big Yud had not bitten his maths teacher, and had managed to attend university for long enough to get to compilers 101
Does Yudkowsky actually know how to program?
He claims to be a skilled programmer, but I haven't seen any of his projects. I don't think that there are any. It is very obvious from everything he says about the field that he has never studied CS, by himself or with others. This isn't inherently a bad thing, but it might be important if you're going to explore the ideas of "problems that AIs can and can't solve."
according to himself
> For two years, from late sixteen through late eighteen, I tried writing a commodities-trading program, by request, for a friend. Eventually I realized that trying to outprogram the stuff already on the market was three years of work for a full team of programmers; I might be able to do the job of a full team, but I couldn't do it in less than three years. I wasn't willing to spend another year, so the project halted. I did wind up with a deep understanding of C++; I think probably up to professional standards, maybe a bit beyond.
https://www.reddit.com/r/SneerClub/comments/d354ih/eliezer_yudkowskys_autobiography_from_2000/
edit: oh shit
> Please do not quote this material, in whole or in part, without permission of the author.
[deleted]
> I've cleaned enough dirt out of my mind that the thought of living in a completely open telepathic society doesn't disturb me
That's just such a random thing to bring up, but it does tie into his obsession with Newcomb's perfect predictor, simulating people's minds to guess what they think, and FDT (which, as far as I can tell, means only thinking about things that would be easy for someone simulating your mind to guess).
So...he "worked" on implementing an incredibly complex program in C++ for "two years" (i.e. a few weeks in this two year span), realized the task was beyond his ability, and came away with an understanding of C++ that's deeper than that of professionals who use it in their work? Okay, bud. He sounds like one of these awkward kids in school, who feels the need to create fanciful lies about themselves and their lives in order to feel important and special, but he went and made a career out of it. I wish my high school career counselors had told us that being pretend experts was an option.
Isn’t it cute how all the software engineers in the rationalist community let this slide while insisting women with code that passes unit and integration tests are incompetent at programming because biology? I bet there are SSC acolytes with strong technical backgrounds who have commented on HN about people with non-STEM bachelor’s degrees being unqualified for med school, unlike Dr. Scooter.
that is absolutely hysterical lmao
I... wow. That's not quite as cringey as the Redditor who blew his inheritance from his grandma on female Twitch streamers hoping to marry one of them, but it's up there.
> I think probably up to professional standards, maybe a bit beyond
With the state of the average C++ codebase in the late 90s, that's a very, very, *very* low bar.
Internally screaming
> essentially all common networks learn variants on regular grammars
I'm probably misunderstanding what you mean by this, but surely GPT-3 is not modelling natural language with a regular grammar?
Most recurrent networks, including LSTM-style networks, do just correspond to regular grammars in terms of what they can learn. However, they are regular grammars with a surprisingly large state space, which means they can fake a lot of things that you would normally expect to require a more expressive model.
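Rough sketch of what "regular grammar with a huge state space" buys you (the depth cap here is just a stand-in for a bounded hidden state, not how any real network is implemented): up to some fixed nesting depth it looks like genuine bracket matching, and beyond that it simply runs out of states.

```python
def bounded_matcher(s: str, max_depth: int = 8):
    """Recognizer with a *finite* state space: it tracks nesting depth
    only up to max_depth, like a regular grammar with many (but finitely
    many) states. Within that depth it mimics real bracket matching;
    past it, it can no longer tell different depths apart."""
    depth = 0
    for ch in s:
        if ch == "(":
            if depth == max_depth:
                return None        # out of states: anything could happen
            depth += 1
        elif ch == ")":
            if depth == 0:
                return False
            depth -= 1
    return depth == 0
```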
dejerked! great comment

I have seen boomer moms discuss roombas on facebook with less anthropomorphisation than this.

Anyone who doesn't anthropomorphise their Roomba is a monster.
I anthropomorphise my roomba: I modded it with various new non-functional attachments from other machines. It now goes around weakly vacuuming and making [this audio clip](https://getyarn.io/yarn-clip/f5e7e687-a44d-41d7-b231-6fb4ecf85ba6/gif). So I'm happy I'm not a monster.
stabby the roomba is a literary hero
How many hairs would need to get tangled in a Roomba in order for it to be the ethical equivalent of the suffering of a single murdered Furby?

There’s a decent point somewhere in there, which is not to conflate text prediction with giving the correct answers to questions.

I wouldn’t call it “AI pretending to be stupider than it is”, nor would I surround it with a whole Twitter thread of sinister connotation.

It’s also not clear he understands his own point since he warns against anthropomorphizing the AI as trying to give correct answers, but then… anthropomorphizes the AI as trying to fool people.

Wow… this completely changes my estimate of EY from intentionally overhyping himself to full-on delusional levels of Dunning-Kruger, where he doesn’t even know how much he doesn’t know. This seems like something where basic general familiarity with the strengths and weaknesses of deep learning approaches would let you avoid the anthropomorphizing EY is doing.

I looked up the context on twitter on the off chance that this might be wildly out of context and EY was making a joke or illustrating a point in an awkward way. Nope, it looks like genuine speculation that GPT-3 has advanced meta cognition.

That or he wants to generate FUD to get more money for MIRI. I’m not sure which is worse: extreme Dunning-Kruger anthropomorphizing or deliberately spreading alarmist fear. Or maybe it’s both?

[deleted]
It keeps track of context over a much longer scale than previous attempts at natural language processing and natural language generation, so you can cherry-pick examples where it seems to be able to do common-sense reasoning and use a prompt in the first question as a constraint on subsequent questions. This is really impressive... but it’s definitely not generalized cognition. You can get it to take nonsense seriously, and it can start out seeming to understand complex problems/questions/prompts but then make basic errors showing it doesn’t have any common-sense knowledge or reasoning. LessWrong has several posts taking EY’s concerns seriously... they try to get AI Dungeon into various scenarios where they can test its physics and math ability. It’s kind of funny, but definitely sneer-worthy, especially when less-upvoted comments point out how rerunning with the same questions/prompts also generates nonsense answers mixed in with the insightful answers. It seems several cognitive biases are at work; too bad LessWrong doesn’t actually improve your ability to think without bias.
[deleted]
If it was just a “Markov chain” it would only have access to the current state when predicting the next state. AI Dungeon definitely uses information across multiple prompts/replies. This means you either need a combinatorially large number of states or you need some form of longer-range context/information beyond just the immediate current state. So it’s definitely not just a “fancy Markov chain”. Just because something is overhyped doesn’t mean it isn’t an interesting incremental improvement. That said, it obviously doesn’t have access to any form of common-sense reasoning or naive physics either, and EY is stupid for thinking that it might have actual meta-cognition.
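For anyone curious what the difference looks like, here's a toy first-order Markov text generator (illustrative only, not how GPT-3 is built): the next token depends on the current token and nothing else, so conditioning on k earlier tokens would need a separate state for every possible k-gram, which is the combinatorial blow-up mentioned above.

```python
import random
from collections import Counter, defaultdict

def train_markov(tokens):
    """First-order Markov chain: count next-token frequencies given
    ONLY the current token (the 'current state')."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def sample_next(counts, current):
    """Sample a next token; nothing earlier than `current` can matter."""
    options = counts[current]
    toks, weights = zip(*options.items())
    return random.choices(toks, weights=weights)[0]

# model = train_markov("the cat sat on the mat".split())
# sample_next(model, "the")   # "cat" or "mat", blind to any longer context
```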
Based on everything I know about Yud (more than I should) it’s both

I could honestly believe that Eliezer Yudkowsky is scared that GPT-3 is smarter than him.

Eliezer continues to demonstrate that he would be the worst possible person to put in contact with any semblance of actual AI. Next we’re gonna see a rationalist accidentally get mugged into giving GPT-3 money.

Word. People who pride themselves on stripping away relevant context (ahem decoupling) have no business training AI algorithms.

Everybody in this thread is missing the point, which is that obviously GPT-3 is deliberately fucking up the parentheses to scare the shit out of Eliezer Yudkowsky

GodPT-3 obviously doesn't want to draw attention to the fact it's rewriting itself in LISP.

my theory is that Yud is GPT-4, cos he writes super convincing sentences and paragraphs and rambling blog posts by the megaword, but doesn’t understand them

The scariest feature of this whole incident? We have no idea if that happened. Nobody has any idea what @ESYudkowsky is ‘thinking’. We have no idea whether this run of @ESYudkowsky contained a more intelligent cognition that faked a less intelligent cognition

Always good to see this subreddit returning to its roots to mock this oblivious nerd. Yud attributes to any machine programmed with sufficient complexity to be named “AI” by industry standards the kind of intelligence currently found only in organic beings. Whether or not organic sentient intelligence is in fact qualitatively or merely quantitatively separate from contemporary functional AI, Yud ideologically can’t conceive that the AIs produced by current theory and machinery aren’t practically continuous in kind with the organically understood intelligence of uncontroversially acknowledged, non-theoretical, unequivocal cognition. If it’s ultimately a matter of quantity, it is still for us, by sheer soritical force, practically qualitative; but for Yud, what he sees as essential in homo sapiens’ cognition must be present in toto in AI from its most primitive form, definitionally.

“Why would it make mechanical mistakes when a true AI [read: what I precipitantly and circularly reduce organic human cognition to] wouldn’t make such mistakes???”

For now, artificial = machine. Machine intelligence, machine learning, machine consciousness: here “intelligence”, “learning”, “consciousness” are metaphors, equivocal terms that introduce their imperfect, abstract, speculatively analogous meanings only by reference to the concrete phenomena of animal (and specifically human) cognition, learning, and consciousness, whatever their ultimate nature. Yudkowsky and their ilk forget this and illegitimately invert the metaphor, scuttling their inquiries by a myopic insistence on the supremacy of definitions that forget the origins of their very concepts.

In conclusion: dumbass, the linked thread is really foolish.

Do you think Yud secretly screams into his pillow for not in fact being at the forefront of AI research?

Do the screams for not being a researcher at all by any stretches of any professional standards come before or after?
I mean academia is really daunting. Getting funding to just work on a few math problems (so long as you can vaguely connect them to a hypothetical AI) sounds like an easier alternative if you don’t care about the quantity or quality of your results. The lack of academic prestige is compensated for by all the cultists you can get to follow you and the lack of the stress of academia.

Don’t he and MIRI have a bunch of game theory and logic papers that can fix this?

Or will future AI be as seemingly random and chaotic as real people are? The kind that resists logical analysis of its behavior and motivations?

In other words, have Yud and MIRI wasted their lives?

No, no, no, it’s the other way around. All the major tech companies need to rework their multimillion dollar approaches to obey the formalism used in MIRI’s math papers or else they will have wasted their money making AI that can’t be proven safe! Without MIRI’s math papers the tech company AIs might secretly be super intelligent while only presenting the front of incremental improvement in natural language processing. You see all that matters is the prior probability of unfriendly super intelligent AI because unfriendly super intelligent AI can perfectly imitate dumb narrow NLP AI. If you don’t follow, read all the MIRI papers and the sequences before you reply to me so as not to waste my time. /s just in case because Poe.

Ahh that is where the ‘GPT-3 is pretending to be stupid’ shit came from.

E: that is also stupid as fuck, there doesn’t seem to be any proper testing done at all, they just keep harping on about ()’s without checking whether the program acting like an obnoxious cunt is a fucking general pattern. (Which, considering the training set, I would bet on.)

You keep using the word “rational”; I do not think it means what you think it means.

Wait surely he can get access to gpt-3 directly somehow? Surely he’s not using ai-dungeon like the rest of us plebs?

He was. The output to the questions was also edited to remove the nonsense. Yud didn’t mention that; you had to click through and read several of the tweets of the person who asked the questions.