r/SneerClub archives
Here's a long article about AI doomerism, want to know you guys' thoughts. (https://sarahconstantin.substack.com/p/why-i-am-not-an-ai-doomer)

Very long post that makes the (in lesswrong circles) extremely controversial point that doing well on standardized tests does not make one intelligent.

I did great on standardized tests and I'm a dumbass, so that checks out

Specifically, it’s an article from a long-term rationalist. This is arguing against details of the theology, not the broader paradigm. She buys Yudkowsky’s schtick, with the sole exception that she doesn’t buy “Current machine learning developments are progressing rapidly towards an AGI.” Note how all the text argues in the weirdly disconnected rationalist manner.

A “still removed FOOMist” rather than an “imminent FOOMist”. Telefoomism vs immifoomism? Pre-foomer vs Now-foomer? Looking for nicely sneery categories.
Foomism is a spectrum not a binary. ;)
What about post-FOOMists who think we are already captured and simulated by the basilisk ^^
They get to be outside of the spectrum as a treat.
Reformed and Orthodox
i can still never see it as other than "Friends Of Ol' Marvel"

So if you become a rationalist, are you required to stop editing what you’re writing for length and clarity?

they can't all be moldbug or scooter, but by god they're gonna give it a go
More words == more smarter, duh

Hey is it just me or are all of these people just fucking about?

Like, obviously homegirl is well-spoken, and basically coherent, which is twice what I can say for Mr. Yudkowsky*, but like, have none of them ever read Hofstadter? Or maybe even better/worse, Asimov? The latter being interesting precisely because his books had absolutely nothing to do with the alleged functioning of a “positronic brain”. It is good, and smart, and hard-sci-fi, because of this omission.

These are not thought experiments, but the illusion of such, trying to hit a limit that does not exist: wHaT iF gOd mAdE a RoCk sO bIg….. It just goes around and around, week after week, with new words and catchphrases to describe the “human brain” that “computers” will literally never have. Hey, what if they did though? That’s just make-believe, not a “thought experiment”.

*: Just realized the special relationship between non-HS-graduate autodidacts and LLM-powered aGi, LMAOOOOOOOOO

We have machines now that can beat the world chess champion, generate elaborate detailed works of art, and hold cogent thoughtful conversations in plain English. This would have sounded like sci-fi just 20-30 years ago. I don't understand how you can still be so pessimistic about what this technology might be capable of in the future. Of course, none of these machines think at all like people do, but that was never the goal of "AI" research. The goal was to make programs that can do anything people can do at least as well as people can do it, and they've been enormously successful at this so far. We have absolutely every reason to take AI ethics and safety seriously right now. Of course, I don't agree with Yudkowsky that we are one small breakthrough away from building a malevolent machine god, but I think you're pushing way too hard in the other direction.
Deep Blue beat Kasparov in 1997—26 years ago. It did not sound like science fiction then, and the technology has not advanced nearly as far as you think since.
I know all this. I just think things like ChatGPT and Midjourney are clearly incredibly impressive pieces of software, especially when you compare them to their predecessors just a few decades ago. It seems eminently plausible to me that even more impressive things may be possible in the next 30 years. I can't predict the future, but this AI skepticism seems absolutely naïve to me.
> things like ChatGPT and Midjourney are clearly incredibly impressive pieces of software

This is absolutely true. But they (100% objectively) aren't intelligent, and (in my subjective but strong and well-founded opinion) do not represent meaningful steps toward artificial general intelligence, which remains, for better or worse, a complete pipe dream.
I feel like people are interpreting a bunch of things into my words that I was not trying to say. I'm not claiming that we are approaching "artificial general intelligence" and I've been repeatedly saying that I do not think the software is "intelligent" in the way people are. But it does look plausible that we are within a few decades of developing combinations of general purpose and specialized software that can effectively replace a lot of jobs that don't involve manual labor.
You did say that machines can "generate elaborate detailed works of art" and "hold cogent thoughtful conversations in plain English"; neither of those things is true, and both imply intelligence. Machines can generate elaborate images that superficially *resemble* detailed works of art, but are not in any meaningful sense detailed (whether they're art is, I guess, in the eye of the beholder; to me they're obviously not). They can crudely simulate conversations, but those conversations are in the most literal senses neither cogent nor thoughtful. They don't even *appear* cogent if you let them run long enough or apply enough pressure to them.

> anything people can do at least as well as people can do it, and they've been enormously successful at this so far

And I completely disagree with this. Machines can, as ever, help humans do things we weren't built for (breathe underwater), or things we've never been able to do very swiftly (dig holes), or things we generally don't do accurately/reliably/efficiently (math), but independently equaling or surpassing *anything people can do*? Pretty much all they've mastered so far are a few games.

> We have absolutely every reason to take AI ethics and safety seriously right now

This I do agree with, but unfortunately "AI ethics and safety" mean very different things to different people. A lot of the money, thought, and attention is going to embarrassingly stupid ends.

> we are within a few decades of developing combinations of general purpose and specialized software that can effectively replace a lot of jobs that don't involve manual labor

I take your point here, too, I just don't think it's anything novel. Steam shovels didn't eliminate the work of excavation, but they did allow one guy to do with a big machine what several dozen people with spades had been needed for in the past. Software will allow one copywriter, or one editor, or (God help us) one journalist to do the work a dozen do today.
It will be bad, yes, but not because of the nature of the technology, just because we live under an economic system that is entirely indifferent (even, ultimately, to its own detriment) to human life and well-being.
I think it's pretty hard to deny that ChatGPT can hold a cogent conversation on novel topics. You can prompt it in adversarial ways to get it to produce nonsense, and it starts to break down as you reach the limit of the context window, but it can often produce very humanlike text. And I've seen AI "art" that I think is pretty impressive. I don't think this implies human "intelligence" at all, it's ultimately a stochastic magic trick, but it is demonstrably a pretty effective trick. It just seems like you're constantly moving the goalpost. If we had been having this conversation a few years ago you'd be claiming that no bot could pass the bar exam or convincingly imitate Rembrandt. Now that they can do those things, you're splitting hairs on whether it really counts and myopically focusing on the weaknesses of the software. It is very possible we hit a dead end with this kind of research, but this looks to me like a battle you are destined to lose. At the very least, it does not seem unreasonable to suppose that the software might get even better in the next few decades and these weaknesses may start to disappear.
ChatGPT cannot *hold a conversation* any more than a ventriloquist’s dummy can.
I feel like this is pedantry. When I say it can "hold a conversation", I mean it can stochastically "simulate" a convincing approximation of a short conversation with a real person. I don't think this is much different from how someone might say they "saw an explosion" in a video game even though they were really just watching a bunch of pixels on a computer screen algorithmically arranged to convincingly portray an explosion.
You are missing the point. It doesn’t matter how convincing the simulation is or isn’t. Either way, *there’s nothing there*—no mind, no motive. Playing an extremely immersive game or watching an extraordinarily well-acted play can be transportive, can make you forget about reality for a few hours, but it does not transform reality outside your subjective experience. Being afraid of AI is like being scared of the monster in a horror movie (or, maybe more aptly, spooked by your own reflection in the mirror). There are, as I said, good reasons to be concerned about the technology, particularly the potential for disinformation and fraud (deepfakes, counterfeiting, etc.). It will be abused, as many technologies before it have been. But the whole “x-risk” argument is risible.
I feel like you're not reading what I'm saying. I've been repeatedly arguing against "x-risk" claims and saying that I do not think the software is sentient. I've never claimed that it has a "mind" or "motive".
What *are* you saying? You keep pulling the ol’ motte and bailey—talking about how machines can make incredible art and hold cogent conversations, then falling back on, “Well, no, they can’t literally do those things, but that’s not what I really meant and you’re being pedantic.” Why are skepticism and pessimism about “AI” unwarranted? In what way are the hype about LLMs and similar generative models and the panic about what technology might follow not completely overblown?
I think the analogy to a video game explosion is appropriate. It isn't literally an explosion, but it can be a convincing enough simulation of an explosion and that is ultimately all that really matters. ChatGPT isn't literally holding a conversation and thinking like a person, but it can convincingly simulate that to some degree. But this distinction is pedantic and it would be silly to insist that someone not refer to a simulated explosion just as an "explosion".

The reason I think the pessimism is unwarranted is totally qualitative. I remember the old-school chatbots from the 90s and early 2000s which would repeat themselves a lot and were often incoherent. The difference between that and ChatGPT is dramatic and stunning in my opinion. I wouldn't be buying into the hype if I hadn't interacted with the software myself and seen what it is capable of. ChatGPT still has many flaws, yes, but if 20 years of research could make that big of a difference then surely it isn't unreasonable to think that chatbots 20 years from now could be even more impressive and humanlike.

To be honest, I have no idea where this technology could be headed and I don't think it's implausible that we hit a wall and it ceases to improve for a long time. However, I also don't think it is inherently crazy to believe that the technology could continue to improve and something that convincingly simulates humans in all relevant ways may not be that far off. Of course, I don't think this would herald the coming of the machine god, but it would be a very big deal for obvious reasons.
Right, so, imagine how silly that person would sound insisting that there was ***an actual explosion in the TV***...
Sure, but the fact that it isn't an actual explosion doesn't inherently prevent it from really convincingly looking like an explosion and it doesn't mean that it would be inappropriate to talk about it as if it were an actual explosion.
First, yes, machines can win at chess, but the other two are debatable and mostly subjective. However, people were absolutely predicting these capabilities from the beginning—like 70+ years ago. Programmers were working on these things in the 70s and 80s (40-50 years). It’s not magic or intelligence. People were designing divination games in the Middle Ages that gave the illusion of communication, for Christ’s sake. And what do those things have to do with what they were saying anyway?
Yes, programmers were working on these things in the 70s and 80s, but nothing like this actually existed outside of fiction until relatively recently. I think it is pretty hard to deny that modern art generators and chatbots are a substantial technological achievement. They don't think like people do, they aren't "sentient" or "intelligent" in the same way we are, but no reasonable person is claiming this and it is beside the point anyway. The concern about this sort of software is not that we are on the road to designing an artificial human mind, but that it can autonomously get things done very competently and still operate in a very bizarre or inhuman way. And I think this is a reasonable thing to worry about as people become more dependent on machines day-to-day (among all the other "ordinary" concerns in AI ethics). These religious-esque proclamations about "superintelligent AI" trying to wipe out humanity are muddying the waters here a lot though.
PCG, Markov chains, and chatbots have existed for quite a long time. The flaws of those still apply to the newer systems. So while the tech behind it might be radical, I think the (good) applications will not be (due to it being a fad, and us never learning from hype, it will be crammed into everything, which will be an ethical/financial nightmare however). Just further incremental enshittification. There is a surprising amount of stuff in the real world not automated for very good reasons. (I have made rate-of-automation assumption mistakes myself in the past.) E: interesting story I heard from somebody active in the roguelikedevelopment Discord. Apparently quite a few people are using chatgpt to develop their roguelikes, but most of them are using it to quickly generate content which they then pick and choose from (I assume like basically a low-grade fantasy/science fiction writer), and only one person is trying to integrate chatgpt into the game itself and is having quite a few problems. (The latter is what I would expect with the black box nature of chatgpt, [see also how I won YudGPT](https://prnt.sc/exFr-RD8GvSV)).
Or what, ***it'll beat me at chess????? Paint a better picture???*** These people should be sneered at. Not indulged.
This software could eliminate people's jobs, generate or perpetuate misinformation or hate speech, or malfunction in unexpected and dangerous ways when acting autonomously in a position of power. These are the sorts of ethical concerns I'm talking about, not the AI god. And, when addressing these concerns, it is perfectly reasonable to consider hypotheticals and thought experiments about what things this kind of software might be capable of doing in the future.
The problem with Eliezer doomerism (and part of why it needs hard pushback), is that it is ignoring these real issues, and in some cases, his proposed countermeasures to doom scenarios would exacerbate the real issues. For example, locking down all LLM development behind additional security and making it all closed-source would reduce transparency making it harder to address algorithmic bias.
Sadly most of your 'coulds' are already happening. (Have not seen the last one yet.) Related: wonder what they are going to train the systems with after the stackexchange mods quit. https://openletter.mousetail.nl/
Nobody gives a fuck about misinformation or hate speech in the entire history of the planet. That's not going to change. There is no such thing as "taking jobs" either, more pretend. You don't need AI to do what's been going on already, again, since the dawn of time. Did the tractors not take jobs from the farmhands? It's all handwringing. Also, what's this "autonomously in positions of power"? Again, that's not a thought experiment, it's make-believe.
Software malfunctions have [literally killed](https://en.wikipedia.org/wiki/Therac-25) people before. And this is on a very small scale with software that is not trusted to handle very much. The difference between this software and a tractor is that it is designed to operate autonomously without any human input. Though I don't expect jobs involving manual labor to be automated anytime soon, there are already lots of people who could plausibly be replaced by these AI tools. IBM announced that they may soon start laying off people, and I personally know artists who are very concerned about their job security. Lastly, I think it should be intuitively obvious that tools which can quickly procedurally generate photorealistic fake pictures or audio could be abused to cause harm.
I think what the person above you is getting at is that *AI* isn’t taking your job so much as *companies* will attempt to replace workers with AI. The rationalist discourse on this topic emphasizes AI as unstoppable subject/agent, as opposed to the actual corporate subjects/agents developing it and deploying it. This makes accountability impossible, which coincidentally is exactly what benefits Google, OpenAI, etc. To the extent that the AI apocalypse is something to worry about, it’s going to be a product of the same *human* misalignment that has always accompanied technological advances. When you say “we need to mitigate risk,” the risk will always be there so long as profit drives tech, as this is precisely what generates the tech misalignment. See: social media content algorithms. Yud et al. are unwittingly doing the Devil’s bidding in all this by insisting on closed source models to stunt autonomous AI’s inevitable growth.
[deleted]
Do you blame a hammer manufacturer when someone kills someone else with a hammer?
Whoever designed/implemented that software killed those people. Your caveats there, well, "not handle very much" except injecting humans with fatal amounts of chemicals? Ok. Artists with job security? Ok. The ethics on causing harm are clear: its harmful, this harm causing. AI changes nothing about this.
I don't care who you blame for the harm, the fact still stands that this new technology can cause harm. This is something that people can and should think about and take steps to mitigate. I don't think this is an unreasonable thing to ask for and I don't understand where exactly you disagree with me.
I don't know if we disagree, this just all sounds like nonsense to me. A toothbrush can cause harm. That's a silly line, should we be concerned about the new Colgate technology?
Well, companies that design and manufacture toothbrushes should be concerned about the safety of their products. But the difference is in terms of scale. We should also obviously be more concerned about gun safety than toothbrush safety, for example, because guns are capable of causing way more harm.
Everyone uses a toothbrush, note. Everyone crying about AI just sounds ridiculous, that's all I'm saying. They're either children, or they should have been, and should currently be, concerned with actual things killing/harming actual humans. But that's not as fun or something, idk.

Unfortunately I don’t have the time and probably not quite the skill, but I would really, really like to see someone with a firm understanding of ordinary language philosophy (plugging /r/ordinarylanguagephil - I know some users are on both subs) have a stab at deconstructing some of the things these people say - even the seemingly reasonable ones like this.

The talk of ‘creating minds’ and ‘possessing world models’ for example strikes me as confused and I do wonder how people are being misled. Minds are not things, and surely human beings ‘possess’ absolutely no ‘world models’.

"World model" is kind of a dumb term that they really overuse - it's like the "holy ghost" of rationalism - but it gets at something real. Humans have ideas about how the world is that are learned partly from experience, and different humans have different such ideas, and so that's the sense in which they have a "world model". What this person gets wrong is that there's no real difference between a "world model" and just, like, abstract information about the world that you make use of. For example she cites an ant navigating a beach as an example of something that's somewhat intelligent but which clearly has no "world model", but that's wrong - the hard-coded behavior of her hypothetical ant implicitly includes a "world model" because it makes assumptions about how the world is, and those assumptions are what make it somewhat successful in navigation.

I’m not mathematically inclined enough to understand what she’s getting at 😅

So Eliezer has hyped up LLMs in order to sell his doomerism, and this article goes against that hype by going into tedious detail explaining what LLMs are actually missing in terms of reasoning and world models (spoiler alert: they are missing a lot). Of course, if you actually interacted with ChatGPT and aren’t a doomer primed to interpret every lucky response as evidence that LLMs are borderline AGI, these points should be obvious to you: ChatGPT fails at math, fails at common sense reasoning, and in general is missing huge chunks of stuff that is basic and obvious to a human and should be to anything deserving the “I” in AGI.
Pretty basic math so far, fam:

> My position is that claims 1, 2, and 3 are true, and 4 is false.

That's a 75% Yudkowsky agreement, which is about 75% too much.
3 might be mostly true as a consequence of 1 being mostly false!

God this thing is long. The author seems to enjoy taking a long time to arrive at pretty surface-level points. Funny that one of the commenters called it “super info-dense.”

She starts by talking at length about agency, which she incorrectly defines as “pursuing goals” (having agency just means that you can perceive and change your environment, a.k.a. you exist and are not dead).

In one sense, every machine learning model has a “goal” – to minimize its training loss.

Equivocates the goal of the model with the goal of the training process. Which would be a pretty minor infraction (I’ve used this kind of wording before too), but she tries to run with this point a bit:

Does this satisfy James’ criterion of “fixed aim, varying means”? Is the LLM’s “goal” the same sort of thing as a frog’s “goal” to escape the water to get a breath of air?
Not quite, I would say.
The LLM relentlessly minimizes its loss function, no matter what the outcome. As far as it’s concerned, “winning” simply is making the number go down.
A frog, on the other hand, has something in the world that it wants (to breathe, so it can survive). The reality of whether the frog gets enough air to breathe is different from the specification of however its brain and body internally represents the goal of “get out of the water and breathe”.

which is an issue because now we’re comparing the goals of something that really has agency (a frog in the world) with the training process of a model, which says nothing about the “goals” that a trained model might have.
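The distinction the commenter is drawing (the training *process* minimizes loss; the trained artifact just sits there) can be made concrete with a toy sketch. This is purely illustrative and not from the article; the names and numbers are made up:

```python
# Toy illustration (not from the article): gradient descent minimizing
# a loss during training. The "goal" of making the number go down lives
# entirely in this loop -- the trained parameter it produces is just a float.

def train(lr=0.1, steps=100):
    w = 0.0  # single model "parameter"
    for _ in range(steps):
        grad = 2 * (w - 3.0)  # derivative of the loss (w - 3)^2
        w -= lr * grad        # the optimizer, not the model, pushes loss down
    return w

w = train()  # w ends up near 3.0; nothing about "wanting" survives training
```

The point of the sketch: once training stops, `w` encodes no objective at all, which is why comparing a frog's in-the-world goal to "the model's goal of minimizing loss" conflates two different things.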

Also as a subclaim, she tries very hard to explain how humans have agency (goals) and can’t believe that some people (linking this) believe otherwise. Quoted from the link:

Since you have no fixed purpose, conformity is out of the question.
You participate whole-heartedly in inseparable nebulosity and pattern.
you do not have an “objective function”
you do not have any “terminal goal”

these are all malign rationalist myths
they make you miserable when you take them seriously

I’m not sure what part of this the author even disagrees with - she later goes on to agree with someone else who says that we humans don’t have fixed goals, we’re able to revise them.

Agency in the way that organisms do it involves a fixed aim in the world, and varying means including the ability to vary the mental specification of that aim.

…okay, not really a fixed aim.

Anyway, at this point I’m too bored with it to keep going. The post is full of italics for emphasis and jargon that she doesn’t even use right.