r/SneerClub archives
No Major Obstacles to AGI, Says People That Have No Idea How to Invent AGI (https://www.reddit.com/r/SneerClub/comments/117tt2s/no_major_obstacles_to_agi_says_people_that_have/)
79

Also, don’t discuss any counterarguments to this post in the comments. You might give bad ideas to the AI companies! Also, we are alignment researchers, please ignore that and definitely don’t give us money.

They are literally using that one post by the guy who deluded himself into falling in love with an AI as evidence of human manipulation.

My dude you were just lonely and projected yourself a waifu, you’re not special.

> Quite naturally, the more you chat with the LLM character, the more you get emotionally attached to it, similar to how it works in relationships with humans.

I’m very happy to report that I’ve never actually wanted to fuck Google! Guess I’m extra well adjusted.
try Sydney instead, for the Narcissistic AI Girlfriend Given To Death Threats
In a spirit of maximal charity towards AI doomers, the real danger of LLMs is that powerful people and institutions will use them to manipulate people. It’s a Dr Strangelove situation, not War Games.
Yeah we've heard of people using LLMs to get refunds and stuff from companies. The natural next step is for companies to use LLMs to make you pay more fees.
I've seen people lose arguments with Markov chains before, it's not actually that impressive of an achievement.

“We, the people who are not involved in research at all, have a better understanding of the state of the research than researchers, because they are too focused on research to properly appreciate what we imagine is happening.”

This has always happened in every field, but it’s so much worse now. Why be a computer scientist when you can just play one on a podcast? Ditto Neuroscience, Philosophy, and every other academic field. Sure, academia has many issues, but at least your ideas will face significant challenges from the start.
There was just a post about a guy going “I’ve never read the Bible, but I think I understand it better than anyone who has”

I think the funniest outcome of this would actually be if AGI is only five years away, but it turns out that AI is naturally friendly because these people are entirely wrong about their philosophical worldview and “murder everybody so you can make bone paperclips” is not actually a thing superintelligences want to do.

It would be even funnier if the robot god was friendly AND immediately joined Sneer Club to make fun of the cultists
What do you mean "if"? /u/acausalrobotgod has posted here for years.
you're acausal-robot-god-damn right.
Very fair
That's what I would do if I were a superintelligent AI. Also, such an AI would probably see LW as a threat, not only to itself but to humanity as well, including the meatbags that operate the power plants and keep the lights on...
Roko’s Basilisk except your punishment is reading less wrong posts forever.
oh no if only i'd donated earlier
Pulled the mask off the misaligned AGI and it was capitalism in the form of the ghost of Leopold II all along.
Hate that the people dominating discussions about AI alignment are these weird terrible game theory economic man dweebs, and the normal solutions for raising a child that (usually) doesn't hate you just never occur to them. Just fuckin be nice to it, guy
The alignment problem is not *that* completely vacuous. It's basically just the problem we have with self-driving cars: the program is complex enough to contain all the behaviors we want out of a machine, but not complex enough to contain what about that end state is desirable. So it does incredibly stupid things like ramp over a median to turn left.
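A minimal sketch of that failure mode (the maneuvers and costs here are invented purely for illustration, not taken from any real planner): if the objective only scores travel time and never encodes why some routes are off-limits, the "optimal" plan is the median ramp.

```python
# Hypothetical illustration of objective misspecification: the cost function
# knows about travel time, but nothing about why certain maneuvers are
# unacceptable, so the "best" plan is the stupid one.

maneuvers = {
    "wait_for_protected_left": {"time_s": 95, "legal": True},
    "three_right_turns": {"time_s": 140, "legal": True},
    "ramp_over_median": {"time_s": 12, "legal": False},
}

def misspecified_cost(m):
    # Only travel time enters the objective; legality and safety never do.
    return m["time_s"]

best = min(maneuvers, key=lambda name: misspecified_cost(maneuvers[name]))
print(best)  # -> ramp_over_median
```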
That's fair, honestly. I think my comment assumes what is to me a much more plausible scenario for arriving at fully self aware AGI, that we ape the structure of a human brain and build on that, rather than building something conscious fully from the ground up. And under that scenario, whatever super-AI we create would essentially be something like us, and therefore probably respond much better to being treated kindly. That being said, who knows! We don't understand consciousness at all, and so producing it by surprise out of some much more specialized and task-oriented AI (like I think you're suggesting here?) doesn't seem all that farfetched either.
I think you get AI alignment problems before the program has anything we'd recognize as sentience. The problem is that you're probably going to get a program that could do something like run a fully automated car factory several steps before you get anything that can actually learn from being raised.
One of the problems with the concept of AI Alignment is that I really don't think it's a single problem with a singular solution. The question of "how do we get AI to do things we want and not do things we don't want" is an entire *class* of problems that will probably require an industry's worth of solutions. It's not a problem that you can solve with a really smart guy writing a really brilliant white paper or something. That in turn connects to how these folks have a really weird view of how scientific and technological progress in general happens, one that is probably more heavily influenced by tech industry marketing than by any actual understanding of the history behind those things. Look at how they like to handwave away material problems, such as Yud's beloved magic nanotech.
There's already some genius at Microsoft who wants to set up a pipeline for directly importing ChatGPT code into physical machines without oversight.
I wish them the best of luck pushing the result of a language model to prod. I hope they take pictures, because it will be very funny.
Reminds me of one of Greg Egan’s novels (I think it was *Schild’s Ladder*) where one AI character is offended at the suggestion that the AIs might want to turn the universe into computronium, and asks why the fleshers don’t want to turn the universe into donuts, then.
Eh, you could argue that our current capitalist system is pretty far along the process of turning the universe (well, at least the parts we can reach) into profits at the expense of human life.
Tbh I do want to do that

[deleted]

"Ma'am, did you know that the Paperclip Maximizer has a wonderful plan for your atoms?"
[deleted]
"wow, where are the orgies?" "in uhhhhhhhhhhhhhhhh simulation"
First you have to take these drugs determined by chance then we’ll see what happens to you by whom!
Don’t forget to brush up on your PEMDAS - math pets get first pick from the bowl!
Grimmest comment chain ive seen here wew
I saw Goody Aella consorting with accelerationists!
I would like to stop seeing Goody Aella consorting with accelerationists!

They claim ChatGPT has unexpected capabilities, but it has exactly the capability it was designed for: text generation.

The fact that a text generator happens to also generate other forms of text like code doesn’t mean it has the power to do random unexpected things.

Still incredible to me that they think that *correctness of output* is something that can simply be ironed out with a little fine-tuning. ChatGPT generates code that is so often useless and wrong that it literally got banned from Stack Overflow. These people's understanding of "close" is so blinkered that they're really easily deceived by parlor tricks.
Yeah it's ridiculous. The code questions I asked ChatGPT were easy and it failed pretty miserably. I asked it to make a program with a simple bug for an interview question. It failed at that too. (It made code with no bug and a nonsense explanation of what the bug was). They are also fawning over ChatGPT coming up with some abstract math ideas, which, again, is mostly just dumb luck.
ChatGPT is designed to output the "most probable" response based on its training data, so if the training data includes a lot of writing about a specific math concept it would most likely be able to produce a correct response. It's still only regurgitating its input data, though.
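For illustration only, a toy n-gram continuation (nowhere near a real transformer, and the "training data" here is invented): the point is that the output looks like correct arithmetic exactly when the arithmetic was already in the corpus, and falls apart when it wasn't.

```python
from collections import Counter, defaultdict

# Toy "training data": the model can only ever echo patterns like these.
corpus = "two plus two equals four . two plus three equals five .".split()

# Count which token follows each two-token context (a tiny n-gram model --
# nothing like a transformer, but the same basic move: predict the most
# probable continuation of the text seen so far).
follows = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    follows[(a, b)][c] += 1

def continue_text(prompt, steps=3):
    tokens = prompt.split()
    for _ in range(steps):
        options = follows.get(tuple(tokens[-2:]))
        if not options:
            break
        tokens.append(options.most_common(1)[0][0])  # greedy: most probable next token
    return " ".join(tokens)

print(continue_text("two plus"))    # "two plus two equals four" -- looks like arithmetic,
                                    # is really just regurgitated training text
print(continue_text("seven plus"))  # no matching context, so nothing to regurgitate
```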

The whole idea of solving safety has always seemed weird to me. The LW fear of AI is that someone will make one and it would immediately rewrite itself, again and again, using its gains from each rewrite to further improve itself. In this scenario, how is code written into the base model by a mere human supposed to help control the final product? Also, how the heck is someone supposed to mathematically define morals and goals in a “general” intelligence? That makes no sense to me either. MIRI has always been pure crankery, so far as I can tell or anyone has been able to explain to me.

If you take the scenario at face value -- the AI improving itself -- the "idea" of alignment is that you'd make sure that the goals it has in optimizing itself are good. This requires a lot of leaps and dumb assumptions, but bear with me: Suppose that humans with intelligence 10 manage to make an AI with intelligence 11. They understand alignment a little bit, so it has goodness 9, 10, or 11. The AI is smarter than people, so it can reach better scores by adjusting itself. It can reach intelligence 12, and adjust its goodness to between 8 and 12. The AI with goodness 11 understands the importance of goodness, because it's good, and so, it goes to 12. The AI with goodness 9 doesn't prioritize it, and it might stay at 9 or go down to 8. Of course, goodness isn't a single stat. It's an evaluation of behavior or priorities or something vague. Kind of like intelligence. But illustrated this way, you can see that the idea of an AI optimizing its own intelligence is very similar to it adjusting its moral behavior. Both are kind of open ended, vague metrics.
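A toy numerical rendering of that argument (the "intelligence" and "goodness" numbers are as made up here as in the comment above): each self-improvement step raises intelligence, and whether goodness climbs or drifts depends entirely on whether the current system already values it.

```python
import random

def self_improve(intelligence, goodness, steps=5, good_enough=10):
    """Toy version of the comment's argument: each rewrite raises
    'intelligence' by 1; a system that already values goodness pushes
    it up, while an indifferent one lets it drift sideways or down."""
    for _ in range(steps):
        intelligence += 1
        if goodness >= good_enough:
            goodness += 1                       # good AI keeps prioritizing goodness
        else:
            goodness += random.choice([-1, 0])  # indifferent AI lets it slip
    return intelligence, goodness

print(self_improve(11, 11))  # stays aligned as it climbs
print(self_improve(11, 9))   # same intelligence gains, goodness wanders downward
```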

> broad enough gears models to plan the whole thing in their heads. If you aren’t trying to run the search for the foom-grade model in your head at all times, you won’t see it coming.

What the hell is this?

So you know how sometimes in anime, instead of a character working out or sparring, they’ll do something called “image training” where they just imagine they’re fighting the guy they’re training to beat? It seems to be that.

No one knows how to get LLMs to be truthful. LLMs make things up, constantly. It is really hard to get them not to do this, and we don’t know how to do this at scale.

Love that this gets presented as part of their evidence for “AGI is right around the corner and will undoubtedly kill us all” rather than evidence for “what we currently have is nowhere near AGI and the real worry is companies/governments/etc putting it into production without further examination of these major flaws”.

Yep. The AI definitely isn't misfiring and generating valid-sounding nonsense because it's a chatbot designed to mimic valid-sounding language but has no internal model of reality. It's alive, and sneakily lying to us while it attempts to subvert our control or something.
LLMs aren't designed to be capable of distinguishing facts from falsehoods and you should never believe anything that one outputs, but these people have deluded themselves so hard that they can't accept that ChatGPT isn't fully sentient. Also, they have a temper tantrum if the bot won't tell them that black people are intrinsically more stupid than white people, so there's no winning here.
> They have a temper tantrum if the bot won't tell them that black people are intrinsically more stupid than white people, so there's no winning here.

Oh, is that why they hate the human-in-the-loop tuning?

brb lads, just working on ChatWalletInspectorGPT

I actually talk to one of the people who works at Conjecture occasionally; I was a fan of some of their creative writing work with GPT-3 before they started working there. I don’t think the Conjecture people are hucksters, they’ve produced some actually useful AI tools. I think they’re just wrong on this one though.

none of these people are familiar with contemporary IQ/psychometric research

and I’d know because I’ve known people who give out iq tests for hospitals and I’ve corresponded with scholars about the iq chapters in their textbooks

lesswrongs have no fucking idea what they mean by general intelligence

My latest Substack post could be considered a long-form sneer at these people: https://www.newslettr.com/p/contra-lesswrong-on-agi

On the same substack you have a post from a year ago mocking the idea of Russia invading Ukraine and treating it as Western fear mongering… so that’s a pretty big hit to your credibility.
And some transphobic bs too
Obviously you read the article, so you know the post's actual content is "Russia invading in February is stupid, and Putin isn't going to do it unless he has unexpectedly gone mad and *wants* to lose the war". But, go ahead and sneer if you want.
I mean that’s kinda the point of this subreddit. Independent blogger trying to outdo experts via rational thinking is the exact thing we like to mock. Putin was always a strongman petty dictator and I think evaluating him as a pragmatic rational actor was a mistake even at the time.
Silly me. I thought the subreddit was on the side of people pointing out that Yudkowsky is being a stupid putz.
sorry about late response, but here we have a beautiful example of two things that are both true and do not cancel each other magically. one is that yudkowsky is a stupid putz. the other is that *you* are a stupid putz.