r/SneerClub archives
Reminder that 80000 Hours thinks 'dangerous advances in machine learning' are a bigger problem than climate change (https://80000hours.org/problem-profiles/climate-change/#other-existential-risks-seem-considerably-greater)

Gentlefolk, you can’t create sneerworthy content here in the comments; this is SneerClub.

Mfking climate change deniers.

E: and before people go dismissing climate change here in the thread, please check if your argument is on this list, because oh boy, it is a bit sad if you come walking in with one of those arguments in [current year]. It is especially sad if you pick one of the top 10 arguments. Or worse… number 1.

[deleted]
Let me tell you about the real threat to humanity: Time Snakes. What are Time Snakes? They're a bio-engineered hyper-being, formed through the multi-mind wave-lashing of thoughts. Not a big problem, because none of those words make sense, right? No. Long term, Time Snakes are the biggest possible risk. Not just existential, but multi-existential. They're basically A.I.^2 in terms of how bad they are, because Time Snakes can move both into the past and into alternate dimensions. As such, a "TS Event" will destroy not only future life, but also present lives, past lives, and lives that never were. We shouldn't really worry about Global Warming. I mean, sure, it's bad, but some cannibal raiders will survive by roaming the wasteland. A.I. is clearly worse, because an A.I. would turn the entire universe into paperclips. It's true, I saw it in a clicker game once. So yeah, it deserves far more funding than Global Warming. Time Snakes are even worse, because they kill not just everyone who will be, but everyone who has been. Any dollar spent on A.I. research (or worse, Global Warming) is, in essence, a dollar spent on the murder of Roman legionnaires! PS: you can donate to my Time Snake foundation through my OnlyFans. Or they'll acausally kill your grandfather before you were born.
> Let me tell you about the real threat to humanity. Time Snakes.

Oh, those guys are the worst, they killed me thirty times yesterday.
Scam username
This whole argument has bigger holes in it than the ozone layer. But whatever.
But Pascal, what if I don't want to go to church and say the prayers and sing the hymns?
You're writing fanfiction, stop.
If self-aware A.I. (is that the type that is supposed to threaten humanity?) is even possible, it's not clear why any "rationalist" would think humans are an intelligent enough species to develop such an invention. Nothing about what we know of evolution implies infinite human mental capacity. Claiming we will invent such a thing is wishful thinking at best and narcissism at worst. In real life, scientists have not the slightest clue how to develop self-aware A.I., yet here you are pulling projections of when we will know the (possibly) unknowable out of your butt. Then there's global warming, which is, like, an actual, real thing.
[deleted]
Malicious humans already have the technology to destroy our species if they wanted to, so I don't know why AI is a concern specifically. On consciousness: we don't know why humans are conscious, so how can there be a "stance" on whether the scientific community can build a conscious machine? Maybe you can explain to me how someone can have a stance on when they will invent something they don't know how to invent? Humans obtained consciousness through random luck... but how does that imply we obtained enough intelligence through random luck to invent a conscious A.I.? We don't even understand our own bodies enough to tweak them to cure death, dementia or aging, but you think we can build a better life form from scratch?
[deleted]
It's not even necessarily possible to non-invasively examine how the human brain works in detail. This article seems a decent primer on the problem: https://www.nature.com/articles/d41586-019-02209-z

probably common knowledge but i’m in a heatwave rn and it certainly puts me in the mood for sneering

The Exxon leaks last week also didn't help with climate-change-related moods.

Wouldn’t it be fucking amazing, just absolutely some real Wachowski shit, if instead of being stuck in quicksand, gazing helplessly into the gaping maw of the consequences of the choices human beings have made as a species, we could do some kind of digital karate and bravely assert human dominance over humans’ environment once more? Wouldn’t that be fucking rad?

Because that’s what this shit is, no?

I am not ever, ever, having kids, not because I’m an antinatalist or anything - more my own life choices than anything else - but a friend of mine found out today that she’s pregnant, and is humming and hawing over whether she’ll keep it. She’s relatively young, roughly my age, and I cannot for the life of me understand how somebody who is essentially powerless in a very fucked up world would want to bring an even more powerless thing into that ever more fucked up world. But she’s also bought (hook, line, sinker) into this kind of rationalist stuff, so after years of reading this crap and talking to these crappists I suppose I kind of get how you can delude yourself so thoroughly into this sci-fi planet idea - which sci-fi planet doesn’t exist - and think you’re…

[here, abruptly, endeth the rant]

Lmao, saying this while living in the best possible period of human history by literally any statistic. Doomers are so much sadder and more pathetic than rationalists.
I’m 27, not a “doomer”, as stupid as that phrase is. Get a life.
> and I cannot for the life of me understand how somebody who is essentially powerless in a very fucked up world would want to bring an even more powerless thing into that ever more fucked up world

> Not a doomer

Lol, ok buddy, I think /r/collapse is missing you.
No, I’m much smarter than that

I mean, it's not what the blog intends, but I absolutely think that when comparing specific powerful assholes ruining the planet to letting those same people have the tools to unilaterally and undemocratically do what they want forever, the balance comes out on preventing the latter being more important.

Mostly because it will lead to climate change, but also all the other terrible things.

I mean, is that really that sneerworthy? I also think nuclear weapons are a bigger problem than climate change. Doesn’t mean we shouldn’t do our best to stop climate change.

Assuming you mean nuclear weapons causing mutually assured destruction bombings, I don't think so. The risk of catastrophic climate change (which is already happening, and we are not getting off this ramp) is bigger than MAD (as there don't seem to be ideological superpowers opposing each other with their arsenals on hair triggers anymore).
I think that's more that people have gotten used to them, rather than them being objectively less dangerous.
This doesn't seem to be a reaction to what I said. Yes, MAD is objectively dangerous (???), but the risk of anyone actually starting MAD or restarting the nuclear arms race seems slim. And apart from that, it doesn't pose the same existential risk.
[deleted]
IIRC there is always a human in the loop.
> I mean, is that really that sneerworthy?

yes

I mean, if you think AI has a decent chance of literally just killing everyone on Earth in the next 50 years, that would be a bigger problem than climate change, wouldn’t it? Doesn’t mean climate change isn’t a huge problem.

If anyone's so eager to believe in fantasy worlds where evil robots are poised to take over, their ideas of what's important are worthless.
I mean, if we take Lovecraft, decide to spend thousands of hours conjecturing on how it might not be fiction but a scientific study, and then buy into our own delirium, then we should conclude that the return of the Great Old Ones is more dangerous than *both* climate change and rogue AIs. The point is, why should anybody approach the world in that way?
My point is that if you know that someone thinks AI has a decent chance of killing everyone in the next few decades, then the fact that they think this is a bigger problem than climate change should be very obvious, and not worth making a post about.
> if you know that someone thinks AI has a decent chance of killing everyone in the next few decades

That's exactly what the post was about; in fact, their Frankenstein complex is a gift that keeps giving for sneers. Like, thank you, we knew ourselves that their conclusions are somehow consistent with their assumptions. But it is exactly the assumptions that are fun to mock.

[deleted]

[removed]
[deleted]
[removed]
[deleted]
[removed]
'A few thousand humans will survive farming crops on the polar caps, the planet will be fine' is such a crazy stance to take for people who worry about human flourishing. (And remember, even if after a thousand years the temperature goes down, the coal and oil and other resources we burned are gone forever; there will be no second chance if you want humanity to become an interplanetary species. An agrarian society can't bootstrap itself into an interplanetary fusion society without the correct resource base. I would prefer it if humanity doesn't get stuck in subsistence farming forever (which would solve the Fermi paradox, however: after a climate disaster, all advanced societies retract and get stuck).)
[deleted]
You do know that humans are a social species, right? And that cooperation is a key factor in human evolution?
[deleted]
War is cooperative violence, without the cooperation it's just two guys fighting.
[deleted]
But the fittest don't survive battlefields the way they do one-on-one fights; it's usually about how close to the front or how senior in rank you are.
War is an amazing example of where survival of the fittest, as meant by social Darwinists, breaks down because of its cooperative nature.
Yeah, it doesn't matter how fit you are if your assigned role in the cooperative is "cannon fodder". And you can be as unfit as you like, as long as you're lucky enough to be on the winning team.
> Life on earth including humans will be just fine

There’s literally an ongoing mass extinction due to climate change. And you’re worried about a hypothetical computer.
[deleted]
Edit: wait, I didn't want to discuss all this shit, nevermind.
There is a very strong survivorship bias at play here: just because that asteroid didn't wipe everything out doesn't mean a similar event couldn't. We are only here to talk about it because it happened the way it did, and that is not proof that it couldn't have happened differently.
I'm sorry you are not worried enough about nanomachines turning us all into grey goo. Just look at how much 3D printers have improved over the past decade!
Wait, my 3D printer runs on grey goo, oh my god!
Don't worry, your worries will soon be over; just touch the forbidden goo.
The threat of artificial intelligence is completely hypothetical. Are you saying something more nuanced than "We should think of the Terminator movies as nonfiction"?
He takes MBTI types seriously, so I don't think it could be more nuanced than that. ISTPs are like cats, and AIs are like The Matrix. Such rationalism.
> The threat of artificial intelligence is completely hypothetical.

Okay, and a hundred years ago the threat of nuclear weapons was completely hypothetical. What's your point? That we shouldn't try to predict future threats and prepare for them?
yes, i do think it would probably have been tactically bad to plan around a nuclear weapon in world war one
I guess my point is that make-pretend threats which may be impossible are not a bigger concern than real threats that actually exist?
[deleted]
Blink twice if the Basilisk is holding you hostage.
thats bullying, based tho
not everything you see in the cinema is based on real life
> The planet has been warmer than it is now many times in its history and life continued.

To understand why this is a stupid argument, see this neat little comic: https://xkcd.com/1732/
artificial intelligence isn't fucking real, imagine being afraid of something you made up in your head
[deleted]
General AI isn't real, and it probably won't ever be real without sci-fi-level advances in both neuroscience and computing. Thinking that some shitty GAN will lead to the creation of a superhuman galaxy-brain god is peak techbro grandiosity. We don't even understand the 302-neuron nervous system of a nematode well enough to create a simulation that would behave like a nematode.
I think one flaw in your reasoning is that you assume we'll need to understand intelligence well to replicate it. That's not necessarily true. Our models today are black boxes, and they work incredibly well, better than anyone would have thought possible just five years ago (e.g. GPT-3). Yes, there are major problems with them, and yes, we don't understand them well, but it would be shortsighted to assume that they'll always remain that way. Better to factor in AI risk than to ignore it, as with pretty much all risks. I understand that the rationalist community seems alarmist about this, and I'm confident that many don't have the expertise to judge the problem carefully, but it's important not to be deliberately contrarian.