r/SneerClub archives

Me and some other people were discussing things like global warming and what to do about it (world communism) when this person points out that hey, even catastrophic and irreversible climate change won’t actually kill the species. Which is probably true, but little consolation for the soon-to-be billions of climate refugees.

After this, this person also claimed that actually global thermonuclear war probably wouldn’t wipe out the species. While this is also maybe hopefully true, I wouldn’t take the worst case scenario that most people can imagine so lightly. So here is where I start to become suspicious. They then go on to say that they’re concerned about the long-term threats to the species and I’m hoping they’ll bring up the need to look out for killer asteroids or something. But no, it’s actually Evil AI that is the biggest threat to the species.

What follows is a discussion where I point out that 1) we don’t know what intelligence is 2) we don’t know if reality is computable 3) machines do not create value 4) all “AI” systems so far do little more than regression 5) autonomous systems that can misbehave already exist and 6) an evil self-replicating AI/egregore already exists and it is called capital
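For what it’s worth, point 4 is easy to demonstrate: strip away the branding and “training a model” is curve fitting by error minimisation, i.e. regression. A toy sketch (made-up numbers, no real system is this small, but the mechanics are the same):

```python
# Minimal illustration of point 4: "training" is fitting a parametric
# function to data by minimising an error, i.e. regression. Here a
# one-parameter "model" is fit to noise-free data y = 3x by gradient
# descent on squared error. (Toy numbers, purely illustrative.)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]  # generated by y = 3x

w = 0.0    # the single learnable parameter
lr = 0.01  # learning rate

for _ in range(1000):
    # gradient of mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # prints 3.0, the least-squares slope
```

Bigger systems swap the one parameter for billions and the line for a deep network, but the loop is the same shape.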

What I found really fascinating is this person’s view of “AI” is seemingly lifted from science fiction. When I point out that I know people in the field, and that all of them think these worries are silly, they counter by saying that the AI safety “field” disagrees. When I ask how any of the present “AI” systems are to reproduce themselves they don’t consider this much of a problem, despite the fact that all present AI systems need human minders. How exactly an unknown function approximator is going to take over the world is apparently not interesting to the people in this “field”.

Oh and somewhere in the middle of this my suspicions were confirmed by them saying they are indeed a member of lesswrong. Which certainly explains why the very concrete fears of climate disasters or thermonuclear war can be dismissed in favor of worrying about computers throwing numbers around.

If nuclear war and climate change aren’t potentially existential threats what other means would AI have to drive humanity into extinction?

Those seem like obvious choices.

Obviously it would use its Super Mind Genius Powers to find an even killier way of killing people. Like *double* nukes or *ultra* climate collapse.
> even killier way of killing people.

Warhammer 40k Orks: *Alls ya need is a good choppa and a bit of the ol' WAAGGHHHH*
Oh yeah good point. But also this would be asking about concrete concerns, which the AI safety people are loath to do.
It's by no means a total refutation of the movement, but it always seems funny to me that they would dismiss those two scenarios as, ultimately, not of concern while also ignoring the idea that, if one wanted to destroy humanity both nuclear genocide and climate change would get you most of the way there, nevermind that they aren't even mutually exclusive scenarios.
Not to mention that humanity may just never recover again. We look back at history and we see a lot of very large wars, and a lot of nuclear explosions, but no nuclear wars (apart from a couple of nukes at the end of a conventional war). That ought to give ya pause when it comes to deeming a nuclear war just as survivable for humanity as a non-nuclear one. You don't have to go full nuts simulation/doomsday argument here; the nuclear war not being survivable is more in line with, you know, not finding any nuked ruins of Atlantis or anything like that.
The idea is that it would outthink humanity so hard we die of narcissistic despair. More seriously, they think AI will have other plans that won't include humans, and will do unto us as we done unto the passenger pigeon.
They... They want to eat us?
oh my goooooooooooooooooodddddd
Soylent Green. Is. People
[Your face when somebody drinks me](https://www.youtube.com/watch?v=HyophYBP_w4)
Yup.
Posting.

Ha, but the human race didn’t die out! I tell my 4 surviving kids (the other 10 died in childbirth) while we subsistence farm on the north pole.

And if anybody mentions the AI safety field (which iirc also consists of two groups: the one who goes ‘ow god this will just reinforce so much racism’ and the other who goes ‘PAPERCLIPS YOU DO NOT THINK IN ENOUGH DEPTH ABOUT FUTURE PAPERCLIPS!’, but clearly here it was just the latter) they are either a LW person or read the popular culture spinoffs from it. Next time, just say ‘well we can just put the AI in a box’ and see their eyes light up with smugness because you fell into the AI-in-a-box experiment trap.

Could we trap Big Yud in a box?
Omega presents you with two boxes, one contains $1,000,000, another contains Yudkowsky, which one do you open
Open both, then close the one that has yud inside
I zero-box in this situation.
I flood the room with halon then wait an hour for it to clear and open both boxes
He will escape, he is just that smart. In fact, just by looking at the inside of the box he will discover new laws of physics.

The annoying thing is that the threat of AI is a lot like the threat of climate change: rich people ignoring externalities in order to plunder and pass off the costs to others.

Obviously the way to mitigate this is to give money to people talking about acausal robot gods who think the way through is a friendly singularity spanning the cosmos.
The difference is that global warming is a real threat and Skynet taking over everything is a sci-fi nerd thinking their concerns are Very Important.
Yes, the AI threat is just that it exacerbates capitalism: rich getting richer, ossifying existing power structures, etc etc. Not an existential threat.
it is weird when they get mad when I say I am more concerned about Global warming than AI. I mean it isn't weird, it is the expected outcome but it is infuriating.

My favorite LWer-in-the-wild story is the time I ran into someone who chastised me for not taking Roko’s basilisk seriously since, and I quote, “the smartest people in the world” think it’s the biggest issue facing humanity now and in the future.

Surely these smartypants must have figured out that there may as well exist an acausal anti-basilisk that will torture them indefinitely if they bring it into being. A bit like Homer's wager where a vengeful god punishes you for worshiping the Abrahamic one.
Even people on lesswrong didn't take that one seriously...

Rationalism is when I get my priors from an Isaac Asimov book.

The more I think about it the more I just cannot see how they get from Asimov to acausal robot gods rather than to "maybe uncertainty is a thing?" Like, the whole Robot series is about how a supposedly-complete set of rules governing all circumstances to create a minimally-bad behavior set actually ends up screwing shit up, either because there were circumstances not accounted for, because people (and robots) wanted something that the model didn't consider, or because someone messed up the program when it was inconvenient for the circumstances.

Foundation, for all that psychohistory is absolute math nerd wet dream material, is always pretty clear that the results aren't meaningful at an individual level, and the second book completely breaks the original plan because of this. I guess The Evitable Conflict and The Last Question do include the whole omnipotent machine, but in the former everyone but the weirdo robot girl is pretty horrified by this turn of events, and the latter ends by literally creating God.

I mean, this is the same group that unironically named their VR vision the metaverse, so I get that actual reading comprehension of sci-fi isn't exactly their strong point, but come on guys.
I think it's actually rehashed Orion's Arm tbh.
Rationalism is when I call "shit I pulled out my ass" priors
Achtually Asimov used robots in a different way: they were usually the downtrodden minority class. It is the adaptations of Asimov which made it into a Skynet threat.

Since the AI safety argument immediately pivots to “fund my friend’s AI-risk alignment non profit” I was always weirded out by it.

In the telling of it, well the way they used to, “AI x-risk is not being worked on” was the justification for the “E” of the EA part – your dollar is highly effective because no one is working on it!

Well shit, we’ve spent this money on the AI alignment problem and what do we have? Oh wait you need more time? Okay….

this might just be me personally but I come here to sneer at EY, assorted Scotts, and other dumbass thought leaders. sneering at some rando you met seems less fun to me

I met a LWer once, and this person was just strange in every way. A helluva writer and pleasant enough to be around but their sense of reality and concerns were just wildly off. They also made me hide all the toothpaste in the house because they were so sensitive to mint that they could, apparently, smell it in the tube.

Then I accidentally found out that another one of my then-friends was LW-adjacent (I think she had a friend or partner who was into it) and got to witness the power of the cult in full force. Basically I made some joke about Roko’s Basilisk and this chick absolutely lost it on me, refused to believe they were a cult or fashy or anything but lovely and harmless. It was my first insight into this person’s personality disorder and an incredibly jarring experience.

Me and some other people were discussing things like global warming and what to do about it (world communism)

as you do comrade

even catastrophic and irreversible climate change won’t actually kill the species.

Is false.

We’re on our way to destruction of the biosphere. Even if we’re not, and we survive in some small way, we have used up all the easily extracted (economically viable) resources we used to build our society.

There is no Earth 2. There is no Human civilization 2.0 on earth 1 either.

Humans are pretty resourceful. Even if we off 99.9% of the species I expect a couple hundred thousand individuals can eke out an existence near the poles. It's just that it's going to suck.
How the fuck are we going to fucking eat without a biosphere and with 1000 ppm CO2 (WHICH WILL LITERALLY MAKE ALL OF US STUPID), dude. Every fucking time it's "humans are pretty resourceful" BUT THERE WON'T BE ANY RESOURCES.
I guess we'll have to walk around with CO2 scrubbers then. Mental problems don't start setting in until around 5000 ppm, and the Carboniferous' levels were at most around 2000 ppm. 2000 ppm wouldn't be pleasant of course, akin to walking around in a stuffy room. There's always resources. Even in the Antarctic, especially if the ice caps melt. To say that there will be no biosphere would be incorrect. Mass extinction has happened before, and the biosphere is still around. It's just going to suck. Billions are going to die.
We will not have the infrastructure to build CO2 scrubbers. It will be low-tech farming forever. The easy-to-get-at coal etc. is also gone, so no way to restart industrialization.
> I guess we'll have to walk around with CO2 scrubbers then.

Literal science fiction insanity. We won't have a global industry to create these science fiction devices even. This thinking is literally just as bad as everything this sub is making fun of. CO2 scrubbing requires EXTREME AMOUNTS OF ENERGY THAT WON'T BE AVAILABLE, it's not even viable now.

> Antarctic

The Antarctic won't exist if the ice melts. There is no big land mass under the ice. It's a bunch of disconnected islands which will be mostly under water.

> Mass extinction has happened before, and the biosphere is still around.

Incomparable to the speed at which we're fucking things up now. 2 degrees becomes 4 degrees becomes 6 degrees, and then everything will die except extremophiles at the volcanic plumes at the bottom of the sea.

> It's just going to suck. Billions are going to die.

NO, EVERYONE WILL DIE. Your optimism is literally the reason we're doing nothing. You have no idea about how societies work. You can't have economies of scale without billions of people. You can't have scientific progress without billion-people industry anymore. YOU CAN'T JUST MOVE THE WORLD SOCIETY TO A BUNCH OF INFERTILE ISLANDS. There won't be any fucking food. https://www.youtube.com/watch?v=5WPB2u8EzL8
> https://www.youtube.com/watch?v=5WPB2u8EzL8
> exergy
> EROI

Looks like I'm in for an hour of reactionary nonsense.

> The only way to make the economy use less energy is to unwind its complexity, and this is impossible to do in a deliberate fashion.

My man here has not heard of planning. Indeed money is fake, but value is not. There are many technical problems in the talk which are amply solvable with current technology, we merely need to deploy it. The way to do that is, as I've already said, planning. Doomerist hysterics does not accomplish that.

world communism

Stopped there.

Same, I can only get so aroused.
Yeah, I’m a traditional liberal cuck. Sincere belief that the one correct political system magically solves some big, extant problem is naïve at best and tends to become genocidal at worst. This goes for liberal democracy too: people thought it was written into the laws of history, that history was about to end, and here we are in 2022.
The point is that the market is incapable of dealing with "externalities" (or is actively harmful) and the only hope for a sustainable economy is global economic planning. There is a rich body of work in this area starting with Otto Neurath. The most recent work is [Economic Planning in an Age of Climate Crisis](https://www.amazon.co.uk/ECONOMIC-PLANNING-AGE-CLIMATE-CRISIS/dp/B0BKHZMVQC) by Cockshott, Cottrell and Dapprich. You can choose to put whatever political system you want on top, but likely it will need to be radically democratic, far more so than the despotic capitalist firms that most labour is done in at present. These kinds of discussions get extra fun with rationalists because they don't grasp that we already have an evil self-replicating AI that is destroying the species and it is called capital.
> The point is that the market is incapable of dealing with "externalities" (or actively harmful) and the only hope for a sustainable economy is global economic planning.

Have you ever read economics that isn't pseudoscience? Economists are well aware of externalities. To think that the solution to an evidence-based sustainable capitalist liberal government, is *an absolutely genocidal dehumanizing* order that has failed dozens of times and has been thoroughly debunked by economists of every respectable school, demonstrates that you simply want your terminally online fantasies to be forced onto everyone.
Marxism is the only economic theory that has the ability to actually be a [science](https://www.marxists.org/archive/marx/works/1880/soc-utop/index.htm). Neoclassical economics is unfalsifiable nonsense. For example it posits that as you produce more of something the cost per unit rises (upward-sloping supply curve), whereas even a child can understand economies of scale. Moreover its vaunted supply-demand curves need at least four coefficients to fit only two observables, hence the unfalsifiability.

> an absolutely genocidal dehumanizing

Nazi Germany and the Br\*tish Empire were/are both capitalist. The USSR in particular *stopped* at least two genocides (Generalplan Ost and the Holocaust). Your going on about muh respectability is also hilarious, these respectable schools being responsible for the soon-to-be biggest genocide that human history has ever seen. Billions of people are going to die if we let the bourgeoisie and their liberal apologists keep running amok.
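The four-coefficients-versus-two-observables point can be checked with a toy calculation (parameter values invented purely for illustration): two completely different linear supply/demand systems produce the identical observable equilibrium, so the curves cannot be recovered from the market data alone.

```python
# Sketch of the identifiability point: linear supply (Q = a + b*P) and
# demand (Q = c - d*P) have four parameters, but a market only reveals
# one equilibrium point (P*, Q*). Different parameter sets can therefore
# be observationally equivalent. All numbers are made up for illustration.

def equilibrium(a, b, c, d):
    """Solve a + b*P = c - d*P for the market-clearing price and quantity."""
    p = (c - a) / (b + d)
    q = a + b * p
    return p, q

# Two entirely different supply/demand systems...
curves_1 = dict(a=0.0, b=1.0, c=20.0, d=1.0)
curves_2 = dict(a=5.0, b=0.5, c=30.0, d=2.0)

# ...yield the identical observable equilibrium, so the observed
# (price, quantity) pair cannot distinguish between them.
print(equilibrium(**curves_1))  # prints (10.0, 10.0)
print(equilibrium(**curves_2))  # prints (10.0, 10.0)
```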
> the only hope ... is global economic planning

I mean, *really*? As if no one else has ever thought of externalities before such that the choice must come down to full-on “world communism” or Milton Friedman’s wet dream. The lack of any humility before your singular ideology purported to solve all problems reminds me *so* of the types people come here to sneer at.
Have you read Marx? Without understanding the problems of capitalism I don't think we'll be able to solve them. It doesn't just come down to 'externalities'.
It's science. It's not that no one can think of externalities, it's that capital doesn't give a shit. Go look at any Western country. Can their politicians actually decide what levels to keep the atmospheric composition within? No they cannot. Do they have any long-term environmental plans? Of course not, that would be tantamount to heresy against capital.
Same. Please, guys -- don't just *assume* everyone here is a leftist. Not everyone is enthusiastically bobbing their heads at the casual mention that world communism will solve global warming. I mean, you *must* see how, to some people, that sounds almost as naive and dogmatic as anything else you'd read from the rationalist community. On some other corner of the internet I'm sure someone is out there furiously typing out "I met an actual communist the other day, and can you believe it, this person actually said 'machines don't create value'" and probably getting one or two decent shots of their own at you.
The difference is that Communists have a large body of political theory, many political parties, and even run some countries. Rationalists have a few blogs. If we're considering who will have more impact on the future direction of human society, then the Communists appear to be far more influential.
I can think of a few other ideologies that fit that description which would have elicited unanimous sneering. Naive reductionism, as in the OP (which is something I'm guilty of myself), can be just as sneer-worthy.
Pretending all ideologies are equally sneer-worthy is your own reductionism. The only naivety here is an unawareness of the political and economic problems caused or exacerbated by capitalism.
To be fair, normal people (i.e. people who don't support communism) aren't terminally online enough to post paragraphs of fake fantasy stories to "own the libs", like most of this sub and this comment section.

World communism? Haha fucking gross dude. Please leave me out of that shit.

Don't worry, you will get your own non-communism place in world communism. Hope you like Siberia, comrade. Our great leader, acausalmarxbro, has thought of everything.

we don’t know if reality is computable

Why wouldn’t it be?

It could be, but we don't actually know. See the Church-Turing-Deutsch principle. We can perform quite accurate simulations, but if reality is continuous (which so far it seems to be) then digital computers won't cut it. Deutsch thinks quantum computers may save the day, but that presumes quantum supremacy is possible, which we don't know. I suspect QS is impossible.
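To make the digital/continuous gap concrete, a toy illustration: floating-point numbers are a finite, discrete subset of the reals, so a digital computer only ever approximates continuous quantities.

```python
# Toy illustration of the digital/continuous gap: floats are a finite,
# discrete subset of the reals, so digital arithmetic on "continuous"
# quantities is approximation all the way down.
import math

# 0.1 has no exact binary representation, so arithmetic picks up error.
print(0.1 + 0.2 == 0.3)  # prints False
print(0.1 + 0.2)         # prints 0.30000000000000004

# There is a smallest representable gap around any float: between 1.0
# and the next float there is nothing at all.
print(math.ulp(1.0))                    # prints 2.220446049250313e-16
print(1.0 + math.ulp(1.0) / 4 == 1.0)   # prints True: the increment vanishes
```

None of this settles whether physics itself is computable, of course; it just shows what "digital computers won't cut it" means for genuinely continuous quantities.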
Spectral gaps for lattice models can be undecidable in their continuum limit, and it's still an open question whether the spectral gap of Yang-Mills is decidable or not, so independent of quantum supremacy it's still very unclear whether or not the Standard Model is computable. Of course string theorists and loop quantum gravity people would argue that the Bekenstein bound means that at the regime where quantum gravity becomes important, this takes care of everything and makes things computable. But of course they still have the problem of actually writing down an explicit, well-defined model in the case of string theory, or actually attaining realistic continuum limits in the case of LQG. So TL;DR, regardless of quantum supremacy, there are hand-wavy arguments to be made either way.
What do you make of the possibility of QS given that drift is a thing? It is my understanding that QS is impossible because there is a finite number of times we can perform any quantum computation before drift cocks it up. As in your quantum computer may actually be doing its thing correctly but you can't actually measure its outcome because at some point your averaging of results stops working because drift's spectral density is pink.
The bigger question is why is this relevant for AI? We already know that we can compute useful approximations of reality, and if an AI is truly "super-intelligent" it would presumably be able to do that too

machines do not create value

What?

It's a Marxist claim related to the labour theory of value: that machines do not make anything, they can only increase the productive power of labour to make value.
To be even more precise, machines can only produce use-values. Machines can only perform concrete labour, not abstract labour. Another way, admittedly tautological, to say this is that machines do not participate in human society because machines are not human.
How is this relevant to concerns about AI?
Every decade or so the capitalist class believes they have come up with some innovation that can rid them of the need to hire workers. AI is merely the latest in a long line of such innovations.
And does using this definition of value and then asserting that AIs cannot create value under this definition somehow disprove that AI will be able to rid the capitalist class of the need to hire workers?
Fancy regression is not going to get rid of the need for people in the economy, especially an economy that runs on the profit motive. All automation can ever do is change what the actual jobs are. You can't sell goods and services to robots, only to people, and those people need incomes. And so on and so on.