r/SneerClub archives
AI safety workshop suggestion: "Strategy: start building bombs from your cabin in Montana and mail them to OpenAI and DeepMind lol" (in Minecraft, one presumes) (https://twitter.com/xriskology/status/1663910389061484545)

At the start of the tweet thread I thought “Yudkowsky sure is giving off unabomber vibes”, which I dismissed as being the product of my internet-poisoned cynicism, but then I got to this part:

> [The AI safety meeting minutes] also included this line: “Solution: be ted kaczynski.”

Oh boy.

I do not foresee this ending well.
What's the fun of running a cult if you can't inspire fanatical terrorism? It's almost a straight shot from "pure utilitarianism focused on the far future" to political violence. He's Ozymandias from Watchmen.
He is the reverse Ozy. 'Do you think I would tell you this if you had any chance of stopping me? I have not even started!'
True. Ozy was willing to actually do the thing he thought should be done and did it personally. With that in mind, it seems out of character that he suddenly got insecure and asked The-Demi-God-With-The-Blue-Dick for reassurance. He seemed invincible to doubt until he suddenly wasn't. Yud is so scared of failure he can't get that far into this.
The Unabomber was massively more intelligent, principled, and thoughtful than Eloser will ever be.
> was

How did you know?

A couple months ago (I think, at least; time is an illusion after all, and I just had lunch), I mentioned that I was reading more and more deathcult-like undertones in Yud's writing and it was worrying me. I'm a bit more worried now.

(So, to keep it on a lighter note, people here might be amused to learn of the AI Wars series, part 1 and part 2, where you play a group of spacefaring humans trying to free yourselves, and your local galaxy, from the influence of an AI which has won the war. (The AI doesn't really care what you do, as it is way too large to really pay attention, so an important part of the game is making it not notice you.))

[deleted]
Without a doubt, at least some of them will self-harm because of Yud’s hopeless doomerism.
I just finished Raven and I’m right with you on that.

Honestly if one of you thought there was a 20%+ chance of species-wide extinction in the near future because of AI developments, wouldn’t violence/terrorism be a live option for you? It would be for me. It seems premature to write off every kind of violence as the sort that would only make things worse in so dire a situation. Obviously it would be wise to write it off publicly like most of them are doing, though.

> if one of you thought there was a 20%+ chance of species-wide extinction in the near future because of AI developments

The issue is, no reasonable person thinks that, because it's stupid.
This is the exact problem with their "Bayesian" reasoning: it makes them convinced that taking radically destructive action is worth it for such a hilariously contrived scenario.
> there was a 20%+ chance of species-wide extinction in the near future because of AI developments

What's your evaluation of the risk of extinction due to climate change? What are you doing about it?
Climate change is a huge issue, but the risk of *human extinction* it poses, on either a narrow or broad view of its consequences (the narrow view considering only its immediate consequences in a mostly isolated sense, the broad view considering its immediate and secondary consequences [among which interactions between the effects of climate change and other world-endangering threats like nuclear war probably figure heavily]), still seems pretty damn small to me. We're going to suffer because of climate change but, as usual, it's going to be the people in less-than-fully-developed countries who suffer by far the most. None of this is to say that climate change isn't a huge issue just because it doesn't put us at significant risk of extinction, or even that terrorism and violence shouldn't be live options as responses to climate change.

“Hey, shouldn’t we consider violence in the face of existential threats?”

“You mean like, against capitalists, whose resource hoarding is accelerating us towards five different kinds of societal collapse?”

“No, like against GPU enjoyers”

(By violence I mean tweeting, not, say, public execution)

"Privately owned infrastructure is endangering us all, perhaps to the degree of extincting humanity. Shouldn't we just go out and destroy it?" "Well, in theory, yes we should, but there's a huge amount of fossil fuel infrastructure, it's very well guarded, and there are few people who would take that risk. Then of course that kind of adventurism usually results in public backlash, so it might not have any effect overall." "Uh, I mean we need to blow up a server farm."

“why is violence a taboo”
- an online terrorist 2023

I will confess ‘Unabomber but not actually good at math’ was not on my bingo card.

“Screw your optics, I’m going in” - Nick Bostrom

Speaking of which, apparently his absence from the most recent AI doomer petition might be deliberate: https://www.lesswrong.com/posts/HcJPJxkyCsrpSdCii/statement-on-ai-extinction-signed-by-agi-labs-top-academics?commentId=H4ti6iGutDbcZ3uwq I guess some of the doomers are being extra cautious about the PR risk he presents.
good thing they found much more renowned AI theoreticians such as Grimes

Strategy: start making perfect simulations (in Minecraft) of the people with bad approaches to AI and torture them after letting them know you’re doing this until they stop accelerating the apocalypse!

The Redstone Risk Research Foundation