r/SneerClub archives
Nuclear war, climate change, bio weapons, mass shootings, and I’m scared of auto predict! (https://twitter.com/liron/status/1655811188557758464?s=46&t=Jh6AFED-yfbLv5W34trf7g)

I generally don’t get scared by charts with no data and no research.

Succinct

The real question is: Do you fear charts with clip-art trains on them?
Trains? No. Trolleys? Yes.
With or without "doom zone"?

“here’s a more accurate diagram.”

OBVIOUSLY the choo-choo train 🚂 of technical progress is headed toward the DOOM ZONE. Why can’t you deniers just accept that?!?!

Thankfully for us, of all the apocalypses, the doom zone [has the best music.](https://youtu.be/Jm932Sqwf5E)
In terms of soundtrack synergy, it was a real missed opportunity for him not to call it the "Danger Zone" instead.
i won’t worry and i won’t fret / ain’t no law against it yet

Just like a PragerU graph!

But the y-axis needs to say “Marxism.”

Maybe a few more crudely-drawn Joe Bidens in the Doom Zone.

Our best and brightest MSPaint users are forecasting a dire future

“[Mr.] Yudkowsky explains minds are basically computers and vice versa and everyone in every discipline understands this fact.”

Kids, don’t wave your hands anywhere near that hard/fast at home!

Minds are like computers the same way legs are like cars, prove me wrong.
Legs are like cars the way centipedes are like car collectors. Centipedes are like car collectors and will soon collect all the cars. There will be no cars left for anyone else. This is my counterintuitive thesis; if you do not understand it, you need to update your priors extra hard and then steelman it until you follow my logic.
In that all 4 can just kind of rise up and start murdering us all on their own one Tuesday here or a month back? Hard agree, it's damn near tautological.
I don't agree with your reasoning, but you agree with my point, so I will constantly make vague references to the fact that you agree.
Fair, however, the reasoning is undisagreeable: Who's to say there isn't a person in our legs? That they could usurp other body parts to remove themselves (fatally?!), and go about reproducing infinitely with no concern for humanity at all. Worse, they could be hostile instead of neutral. What if they made very tiny legs? What then? Huh? Exactly, checkmate. I wrote this TED talk with no preparation, and everyone around me has been clapping continuously since I hit reply.
Good old "Anything can be broadly compared to the equipment I work with regularly, so everything *is* that equipment" brain. Some people thought the world was like a wheel, some thought it was like a book...
Computers had a proof that they can do literally every possible computation back when the word "computer" meant "a person mechanically doing calculations on paper"; wheels didn't. Sure, silently assuming computationalism in philosophy of mind without even considering other options is a stretch, but "hurr derp nerds think everything is a computer because they don't touch grass" is a lazy take.
> literally every possible computation

this is a really, really good example of question-begging, thank you for your service!
Yeah, to be fair, I'm kind of projecting my own bad habits I'm trying to curb here
On the other hand, I'm going for malice (deliberately hiding the assumption of computationalism, and hoping the reader doesn't catch on) while yours only requires incompetence (picking the most complicated device around and saying that it must be the same as a human brain), and I have no idea what's more likely with AI cultists.
What proof are you referring to?
[Church-Turing thesis](https://en.m.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) Yeah, I know that the most interesting part can't actually be proven, and that it is possible that someone invents physically realizable supercomputation; it's just unlikely enough for me. I don't see any problem with assuming that, as I'm not aware of any theory of consciousness that would require physically realizable supercomputation. The ones I know either abandon physicalism altogether, or postulate a reason why a human simulated on a sequential computer would be a p-zombie (like [Integrated Information Theory](https://en.wikipedia.org/wiki/Integrated_information_theory)). Edit: okay, I forgot about [Orch-OR](https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduction).
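For the curious, the "proof" being gestured at is Turing's universal machine: one fixed device that, handed another machine's rule table as data, reproduces its behavior. Here's a toy sketch of the machine model itself; everything in it (the function, the rule-table format, and the example increment machine) is invented for illustration, not anything from the thread:

```python
# Minimal Turing machine sketch: a finite rule table driving a read/write
# head over an unbounded tape. The universality result is that a single
# fixed machine of this shape can simulate any other, given its rule table.

def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
    """rules: {(state, symbol): (write, move, next_state)}; move is -1, 0, or +1."""
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as "_"
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# Example machine: increment a binary number (least-significant bit on the right).
increment = {
    ("start", "0"): ("0", +1, "start"),  # scan right to the end of the input
    ("start", "1"): ("1", +1, "start"),
    ("start", "_"): ("_", -1, "carry"),  # step back onto the last digit
    ("carry", "1"): ("0", -1, "carry"),  # 1 + carry -> 0, keep carrying left
    ("carry", "0"): ("1", 0, "halt"),    # absorb the carry
    ("carry", "_"): ("1", 0, "halt"),    # carry fell off the left edge
}

print(run_turing_machine(increment, "1011"))  # -> "1100"
```

Whether minds fall inside what machines like this can do is exactly the computationalism assumption being argued about above; the sketch only shows what "computation" means in the thesis.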
> Is there a person in there? We just don't know.

Said about GPT-4.
Only one, and his hands are getting really tired.
This is not a fun sneer. It angers me. Greatly. Or rather, my GPT-X is telling me there is anger to experience. Is there a person in me? We just don't know. What I do know is that's maybe the most royal We I've ever heard. How does Mr. Yudkowsky respond when (assuming I'm nowhere near the first) total laypersons want to engage him on these issues? I mean, I don't even understand how any of this stuff is even remotely defensible against a literal 5-year-old; the aether itself seems vocal enough to do it, ffs.
Waving your hands that hard is basically the same thing as flying, and vice-versa, and everyone in every discipline understands that fact.
"And vice versa" is doing a lot of work here
Ya, but not near as much as my insisted honorific. It's not like this dude has been wearing a different dress to the Mirage 4 shows a week for 30 years. But that's just a trick of Twitter; I can imagine how his pamphlets would have read.
I know this is basically a no-learns zone, but I'm curious: are you saying computationalism in philosophy of mind regarding brains is false?
It's more like the twitter poster is not saying it is true in any way that meaningfully contributes to their point.
Ah, hence the hand waving... missed that when I initially read it. Thanks.

My friend once told me that when she was 5, she asked her dad how nuclear energy works. He said, when you keep splitting a thing into two parts, you will reach a point where splitting it one more time will create a big explosion that destroys everything.

So she went to the garden, grabbed a flower, and started ripping it into smaller and smaller parts. When she got to the point where it was too small, she split it one more time. Then she started crying and had a breakdown because she thought she had just destroyed the world.

She understood nuclear fission about as well as some of these doomers understand AI (actual AI researchers on the AI doom train excepted, of course).

Diagrams that look like shitposts

Hahahahahahahaha How The Fuck Is Rokos Basilisk Real Hahahaha Nerd Just Walk Away From The Screen Like Nerd Press The Off Button

I’m intrigued that the board game AIs move further to the right as they get more powerful. How does AlphaZero “represent more outcomes” than AlphaGo my dude? As far as I’m aware there are still only three possible outcomes of a Go game

I think he means that AlphaZero is more generic: it can play arbitrary games, rather than just Go. So it's true that it can represent more game states than AlphaGo can. The plot is kinda nonsensical, though, because "optimization power", whatever that means, should be more or less the same for AlphaGo, AlphaZero, and MuZero; they all use Monte Carlo tree search. And it should probably be *higher* for Stockfish than for the others, because Stockfish uses sophisticated heuristics in addition to Monte Carlo-type planning. EDIT: It also doesn't make sense to put LLMs on the plot at all; they have no inherent "optimization power". You can use any kind of planning algorithm when you represent the state space with an LLM.
Oh yeah, that makes sense. But Stockfish should probably be higher than the Monte Carlo neural-network-type stuff, since it's actually doing game tree search, not Monte Carlo, so it's actually *better* at optimizing. (I mean, maybe it does Monte Carlo as well; I'm not exactly an expert on the internal workings of Stockfish. But it does consider all moves from each position, whereas, to my understanding, Go AIs don't.)
"Monte carlo" in this context is just game tree search, but some branches are prioritized more than others if you don't have time to visit them all. The neural network determines the prioritization. I think stockfish does that *too,* but its special sauce is that it also uses hand-coded heuristics to inform the tree search prioritization, whereas the various Alpha- and mu-AI's deliberately avoid doing that, because they're supposed to be proofs of concept for generic AI.
Er, right, bad phrasing on my part, but Stockfish to my understanding looks at all branches of the game tree (with pruning), vs. Go AIs, which use neural networks to decide which branches to examine, since there are way too many possible moves otherwise. (Again, unless I'm wrong about how Stockfish works, but to my understanding this is why Stockfish isn't vulnerable to the same kinds of adversarial attacks that were able to defeat KataGo etc.)
No idea about MC, but Stockfish has had a shallow NN for scoring individual positions for some time now (while still using a tree for global optimization), and it consistently beats Leela (an open-source AlphaZero-style chess clone).
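For anyone wondering what "the neural network determines the prioritization" cashes out to: AlphaZero-family engines pick which branch to search next with a PUCT-style score, where the policy network's prior steers exploration. A toy sketch follows; the constant, the visit counts, and the move names are all made up for illustration:

```python
import math

# PUCT selection, the rule AlphaZero-style MCTS uses to pick a branch:
# exploit moves with high mean value q, but boost moves the policy network
# rates highly (prior) that haven't been visited much yet.

def puct_score(q, prior, parent_visits, child_visits, c_puct=1.5):
    """q: mean value of the move so far; prior: policy-network probability."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration

def select_move(children):
    """children: {move: (q, prior, visits)}. Returns the branch to visit next."""
    parent_visits = sum(visits for _, _, visits in children.values())
    return max(
        children,
        key=lambda m: puct_score(children[m][0], children[m][1],
                                 parent_visits, children[m][2]),
    )

# Three candidates: a well-explored favorite, a promising but barely-visited
# alternative, and a move the network considers junk.
children = {
    "e4": (0.55, 0.60, 400),
    "d4": (0.50, 0.30, 30),
    "h4": (0.10, 0.02, 5),
}
print(select_move(children))  # -> "d4": the under-visited move gets the visit
```

Hand-coded heuristics à la Stockfish would just be extra terms or pruning rules layered onto this same selection step, which is the distinction being drawn above.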

AgentGPT is not going to go rogue: it needs to be hosted on a computer and is far too large to become a "virus". The best way to do something like this would be to infect computers that already have a way to run LLMs (like oobabooga).

At that point, all that would be required would be administrator access (hard to get) and just a command-line execution… and then the owner would need to not notice it running in the background taking up 70% of their GPU lmao.
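(And the "70% of their GPU" part really would be hard to miss. Assuming an NVIDIA card with `nvidia-smi` on the PATH, one line is enough to see it:)

```python
import subprocess

# Quick sketch of how visible a GPU-hogging "rogue" process would be.
# Assumes an NVIDIA card; the query flags are standard nvidia-smi options.
out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=utilization.gpu,memory.used",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "70 %, 7012 MiB" -- hard to miss
```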

But the AGI would be supersmart, so it would just know how to hack the task manager (or your local equivalent)! Big smart beat little smart. ('Just' is doing an enormous amount of work here.)

Liron has substantially gone off the deep end. He seems to be sincere, but MAN his correct interpretations of crypto seem in retrospect to be a bigger and bigger fluke.

This guy started off "promoting" his own tweets on Twitter to grow an audience, shifted to anti-crypto stuff in order to keep growing it, and has now switched to "AI risk" as a new approach. The sneers are fine with me, but I don't like handing grifters free attention.
[deleted]
He spoke a lot about crypto not having good use cases, and about looking at the profitability of web3 companies. It wasn't revolutionary stuff, of course, but it happened to be correct. He's doing a bit of the "I was right about crypto, so you can trust I'm right about how an AGI is going to 'brick the universe' essentially the moment it becomes self-aware, because everything I learned about thinking I learned from Eliezer Yudkowsky." (That last bit is something he actually told me.)
[deleted]
Indeed. The fluke was that he was right. :)

I believe LLMs (maybe with the help of a bit of extra code like AutoGPT) are getting close to planning arbitrary actions

Shhh nobody tell him about decision transformers, he might not recover from that.

DOOM ZONE

everything that has a beginning has an end.

There are no units on the graph, so how can he tell how close we are to the ‘doom zone’, or whether a sizable ‘doom zone’ exists at all?

Edit: I am not even good at math, and even I know that useful graphs must have units. Shouldn’t the Twitter user know that too?