r/SneerClub archives
Getting to the point where LW is just OpenAI employees arguing about their made up AGI probabilities (https://www.lesswrong.com/posts/DgzdLzDGsqoRXhCK7/?commentId=mGKPLH8txFBYPYPQR)

Ah, but they’re admitting the probability is NON-ZERO, and we also know that once AGI exists, the probability of AI DOOMSDAY is 100%. Chessmate, forecasters! The acausal robot god is coming for your souls!

This makes me wonder: so the probability is non-zero, combined with many-worlds theories. So there is a guaranteed future that has AGI; AGI is a superintelligence, so it figures out how to cross into different worlds, ergo everything being paperclips is unavoidable.
The grossest thing about Yud’s “we’re all going to die” schtick is that he keeps illustrating his despair with references to kids, including that one time with a reference to one of his partners’ kids, and I’m not sure Yud should be around other people’s kids.
Ok, seriously, that's a Jim Jones thing again. For real.

They don’t even include “Shadow groups of elite Rationalist ninjas assassinate top AI researchers” in derailment events, smh. I put the probability of that occurring at 12%, which lowers the probability of AGI by 2043 to ~0.35%
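(For anyone checking that arithmetic: it works if you take the post's headline estimate, which the ~0.35% figure implies is about 0.4%, and multiply in the ninja scenario as one more independent condition. A minimal sketch in Python; the 0.4% baseline is an inference from this comment, not a number stated in the thread:)

```python
# Back-of-envelope check: treat "no rationalist ninja assassinations" as one
# more independent condition, per the post's own multiply-everything method.
baseline = 0.004                  # assumed ~0.4%: the post's headline P(AGI by 2043)
p_no_ninja_derailment = 1 - 0.12  # 12% chance the ninjas derail everything
print(f"{baseline * p_no_ninja_derailment:.2%}")  # -> 0.35%
```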

So I tried to explain Rat thinking to someone with a background in statistics and said "OK so these people basically believe you can do Bayes theorem on anything and you just kind of plug in whatever number your gut tells you is right" and she just put her head in her hands and groaned.
That sounds like the appropriate response.
Isn't there a better word than "research" for what these people do? I mean, has it just lost all meaning? Am I researching during my xhamster time? What about eating, am I researching the hamburger? "AI researchers", idk, sticks in the craw goddammit.
[I was referring to the post/thread discussed here](https://www.reddit.com/r/SneerClub/comments/13wsm1h/ai_safety_workshop_suggestion_strategy_start/), fwiw. The hypothetical targets would be engineers or other people doing more practical machine learning/AI research or work, not the people churning out speculative fiction.
I'm very confused by the math, btw; a lot of these percentage chances seem to depend on each other. You can't just go 'chance my brain gets hit by a bullet: 10%', 'chance my heart gets hit by a bullet: 15%', etc., and conclude I only have a 1% chance of death if you shoot at me. I'm pretty tired atm and haven't looked at it properly, but the whole calculation feels off. Also, a problem with your ninja odds: AGI would be open source, so Stallman with his sword and Linus with his nunchucks would defend them. So that increases the risk of failure for the ninjawrongs. The ninjawrongs also have Eric S on their side with his guns, so that increases the risk of ninjawrong failure even more.
They're joint probabilities -- you smoke a joint and make up some probabilities. But yes, they assumed everything was independent, so the calculation is just P(e_1) * P(e_2) * ... etc. They give some justification for the use of unconditional probabilities, but I didn't look at that too closely. I was partly joking that the result is sensitive to the number of conditions. For example, with 10 conditions, if you give every condition a 90% chance, you get a probability of ~35% (0.9^10). With 20 conditions at 90% each, it's ~12% (0.9^20).
> They're joint probabilities -- you smoke a joint and make up some probabilities

my god
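For anyone who wants to poke at that sensitivity themselves, here is a minimal sketch, assuming (as the post does) that every condition is independent:

```python
# How a product of "independent" probabilities shrinks as conditions pile up --
# the sensitivity the comment above is only half-joking about.
def joint_probability(p: float, n: int) -> float:
    """P(all n conditions hold), under the independence assumption: p ** n."""
    return p ** n

for n in (10, 20):
    print(f"{n} conditions at 90% each -> {joint_probability(0.9, n):.1%}")
# 10 conditions at 90% each -> 34.9%
# 20 conditions at 90% each -> 12.2%
```

Every extra 90% condition knocks roughly 10% off the headline number, so the bottom line is largely a function of how many rows the authors chose to put in their table.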

In case anyone TLDR’d it and needs some context, the author of this piece is a machine learning engineer at OpenAI. He has a PhD in applied physics from Stanford.

In case you needed one, this is a good reminder to never feel insecure for not getting a PhD or not attending a prestigious university.

The author raises one point that I’m surprised I haven’t seen before:

> We avoid derailment from wars (e.g., China invades Taiwan) | 70%

If there’s one very plausible thing that’s guaranteed to stop AI dead in its tracks for years, it’s a war over Taiwan that causes the destruction of TSMC.

I wonder why rationalists don’t talk about this more often? Yudkowsky even advocated for bombing data centers, yet he didn’t feel tempted to suggest that they do a 4D chess xanatos gambit to provoke a China-US war over Taiwan? I would have assumed that sort of thing would appeal to him.

IIRC the USA has plans to bomb the chip factories if Taiwan ever gets invaded. Setting up new ones would take decades.
> yet he didn't feel tempted to suggest that they do a 4D chess xanatos gambit to provoke a China-US war over Taiwan?

That's because after he analyzed 14 million scenarios, that's the only one where we win, but if he tells us about it, it won't happen.
The whole point is that he advocates for something we won't do, then moves the goalposts the next time people get surprised by AI and claims this is why we should have bombed the data centers. If we do what he wants us to do, the dog catches the car.
Exactly this. The LARP can’t ever get too real or else he wouldn’t be able to continue playing make believe.
> I wonder why rationalists don't talk about this more often? Yudkowsky even advocated for bombing data centers, yet he didn't feel tempted to suggest that they do a 4D chess xanatos gambit to provoke a China-US war over Taiwan? I would have assumed that sort of thing would appeal to him.

What's the current rat groupthink on China? They might figure that this is likely to happen anyway; "China is going to try to invade Taiwan within the next few decades" isn't that uncommon a view.

I met one of these people once, and he seemed like a decent fellow. Seeing him in full-on cult mode saddens me.

So in the section about their credibility, Ari lists some things they predicted correctly that were pretty obvious (such as the idea in 2020 that COVID would be a pandemic, that mRNA vaccines would work, and that level-4 self-driving wouldn't be in place by 2021), but also brags about flipping an "abandoned mansion" for 5x the price, all as reasons to take their AGI interpretations seriously.

But then follows that up with this paragraph:

“Luck played a role in each of these predictions, and I have also made other predictions that didn’t pan out as well, but I hope my record reflects my decent calibration and genuine open-mindedness.”

Super weird to list a handful of hits, then acknowledge that a bunch of misses happened without giving any information about what those misses were, and then claim to have established a track record.

Like, if you’re attempting to make an argument for the statistical likelihood of something, it's weird to act as though you could reach any conclusion about your own reliability by saying “I made 6 correct predictions and an unknown number of incorrect predictions, which is pretty good, right?”

Yeah, that was a very eyeroll-worthy part. Also a good example of why all this focus on being a superforecaster looks weird and flawed to anybody not in the cult. (Made even better by the fact that if you have a lot of people betting randomly on a lot of events, eventually somebody will be right a lot.)

How come science has been replaced by speculation? By people who worship science xd

The people from LW and SlateStarBS would be well advised to heed the iconic words of Mark Twain about yammering on endlessly:

> The more you explain it, the more I don’t understand it.

I just wish these people would just shut up for a couple of weeks.