r/SneerClub archives
it's an ongoing delight that the LessWrong rationalist cult still considers me, personally, the source of all their problems - and not in any way their own cultish behaviour, their scientific racism, their #metoo, etc etc etc etc (https://reddragdiva.tumblr.com/post/684898588329951232/dont-like-the-way-the-ai-cult-appears-to-be)

The fact that there’s no evidence of AGI risk is precisely why you should be so scared of it!

Also, we’re not a cult.

Possibility that there are terrible threats out there for which we would get no lead-up or forewarning: non-zero, I guess? Reputable people seem worried about gamma-ray bursts and solar flares and stuff. Possibility that this looming threat will certainly happen in your lifetime and cause dreadful harm to you personally, but you can fix it by financially supporting nerds while they do whatever they want: may I interest you in something called "evangelical Christianity"?
> no lead-up or forewarning

> gamma-ray bursts and solar flares and stuff

AGI is simply on another scale of implausible threat, with no evidence or events to speak of. Meanwhile NASA has recorded and studied 1000s of gamma-ray bursts and NOAA tracks solar flares as "space weather" that regularly causes planet-wide radio communication disruption. Other than that, yep, cultish exploitation racket.
Out of curiosity, do you think there *is* a risk of AI "taking over" one day? (In the old-fashioned sense of autonomous AI that is in charge of its own destiny, rather than the present-day sense of a human regime whose power rests on AI.)
There is a reason that rationalists depend on Pascal's Mugging, manipulating the math of an infinitesimally small risk of AI with an infinitely high consequence. I don't believe them. The first AGI, if there ever are any, will be "stoppable" by pulling the plug.
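(As a minimal sketch of the expected-value arithmetic being objected to here, in Python and with entirely made-up numbers: however tiny you make the probability, a big enough claimed stake swamps any ordinary comparison, which is exactly why the move proves too much.)

```python
def expected_value(probability: float, payoff: float) -> float:
    """Naive expected value: probability of the outcome times its payoff."""
    return probability * payoff

# Ordinary intervention: near-certain, modest benefit (arbitrary "utility" units, assumed).
mosquito_nets = expected_value(probability=0.95, payoff=1_000)

# Doomsday pitch: absurdly small probability, astronomically large claimed stake (assumed).
agi_doom_pitch = expected_value(probability=1e-15, payoff=1e30)

print(mosquito_nets)   # 950.0
print(agi_doom_pitch)  # 1e+15, so the naive comparison is always "won" by the mugging
```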
I think it's unwise to equate "AI takeover" with just "a single AI takes over". If it's a network of a thousand AIs making all the big decisions, or even a kind of ubiquitous operating system, it's the same thing from a human perspective.

And in arguing that there is a genuine risk here, I wouldn't rely just on the magnitude of what's at stake. I always used to emphasize the contingency of a computer's goals. It would be a mistake to watch computers slowly surpassing human capabilities in all cognitive domains, and then just trust that when they *have* surpassed us, they will be guided by principles that are human-friendly or human-compatible. That is something which I think would only result by design.

From that perspective, the current rapid escalation in AI abilities is alarming, because none of the leaders seem to be looking that far ahead. They're just competitively pushing the boundary of what the machines can do.
> It would be a mistake to watch computers slowly surpassing human capabilities in all cognitive domains

Current machine learning is not "surpassing human capabilities" in any meaningful way, and does not imply AGI is inevitable, or even prove it's possible. Nothing close to AGI has been made. Just programs doing what people made them to do, reproducing their biases. The declaration that it's coming (and will "win" the moment it arrives) requires extraordinary evidence, and "the current rapid escalation in AI" is not evidence of anything about AGI.
> Reputable people seem worried about gamma-ray bursts and solar flares and stuff.

My favorite is [False Vacuum Decay](https://en.wikipedia.org/wiki/False_vacuum_decay). The universe unravels at the speed of light from a distant point, so when it finally arrives, it arrives without warning. One moment, fine sunny Tuesday afternoon; the next moment, omnipresent chaos gods. I'm not quite sure how to monetize this yet, though.
Monetize by claiming you're starting research into zero point energy generators [add the word "safety" to taste]. Just claim they also prevent false vacuum decay, and hit both seams of grifting gold at the same time. Any critics? Accuse them of wanting to stop you from saving the world, when what you're doing is purely altruistic. We lucked out that the original Total Recall didn't also have this. (It had going to Mars, robot taxis, brain implants, and the government hunting down the 'innocent' saviour for their personal gain (he was only guilty in a previous life).)

Wait, do rationalists actually tithe to OpenAI?

Personally I’m still donating 10% but it’s mostly as a contractualism/signaling thing

I thought it went to Effective Altruism? But as someone who was previously in this mess, yes.
Yeah but EA decided that the most important thing to spend money on is AI research.
Ah, I didn't realize; I thought they were still on mosquito nets
And by AI research they mean OpenAI, not just any AI research.
That's what's funny about it. You could work on AI whose purpose is getting the world to mine bitcoin, and they'd fund that, provided you make a slight pretense of taking Yudkowsky seriously.
It wouldn't even be hard: just hype how fiat is government-run, governments are stupid and bad at doing the right thing (link to a piece Yud wrote; iirc he wrote something about Japan's fiat problems once), and if they develop AGI they'll surely cock it up, so this cryptocoin/blockchain project should be a good replacement for the government's power.
I mean LWers are largely big crypto-heads, so...
That too, better living through computer touching. (I say this as a big toucher myself.) E: this weekend, however, is convincing me that the people from Dune were right and I might join the Amish.
Oh dear. I was planning to sign the Giving What We Can pledge once I have an income again. How compromised are they, exactly? Other than a couple of AI-risk charities, they seem [pretty well-behaved](https://www.givingwhatwecan.org/best-charities-to-donate-to-2022/) at a glance...?
Ok I don’t want to get folks confused—there’s effective altruism the idea and there’s Effective Altruism the organization that administers several donation funds. I’m talking about the former and the people that apply that concept and somehow end up deciding that MIRI or OpenAI are better than mosquito nets. The EA Funds—there are several options there and plenty of transparency.
I feel that funding MIRI is an extrapolation of the mosquito-nets type of thinking. With mosquito nets, you already have a calculation where all the other costs involved in actually saving a life are ignored: the child who's saved from malaria needs food, education, etc., the funding for which the mosquito-netter wants to redirect to mosquito nets, being upset that e.g. locals use said mosquito nets to fish because they're hungry. Of course that's not quite as idiotic an error as it is with the AI nonsense, but it's much the same tendency: do some elementary-school-level calculation (ideally involving a single multiplication, as in the sketch below), proclaim the recommended action rational, and declare it so much better than what anyone else is doing - they're simply throwing their money away on food, education, etc., or giving money to people who wouldn't buy mosquito nets with it - when the best intervention is mosquito nets, or better yet, AI.
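(For illustration, here is the kind of single-multiplication calculation being sneered at, in Python with purely hypothetical numbers rather than any real charity's figures.)

```python
# Hypothetical numbers only, not any real charity's figures.
cost_per_net = 5.00          # dollars per mosquito net (assumed)
nets_per_life_saved = 500    # nets distributed per statistical life saved (assumed)

cost_per_life_saved = cost_per_net * nets_per_life_saved
print(f"${cost_per_life_saved:,.2f} per life saved")   # $2,500.00

# Everything outside this one multiplication (food, education, what recipients
# actually do with the nets) is left out of the model; that omission is the
# point being criticized above.
```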
I think we should be careful not to lump EA and LessWrong folks together. I’m heavily involved in EA and am good friends with top leadership. Many of them actively detest Yudkowsky and disagree with his views about X-risk stemming from AI. They’re distinct movements and the overlap is not as great as it appears on the internet
Except the bit where Yud gave it its present name, ofc. MIRI previously had its tendrils all through EA. If you can show verifiable evidence of how they were functionally disentangled from hypothetical non-shitty EA, that would be interesting reading.
I’m genuinely curious about this as well. I won’t name names, but the leadership and broader real-life EA community I’m close to are principally concerned with global development and animal rights. That’s how it’s always been since I got involved in 2011. The folks that are interested in AI are mostly huge misanthropic iconoclastic nerds who are weirdly into crypto, are very ONLINE, and couldn’t care less about reducing present-day suffering. Those groups have little intersection in real life, even if they appear to in Twitter circles. What prominent EA voices are you referring to that explicitly endorse LessWrong/Yud?
I don't think they reference LessWrong or Yud specifically, but looking at [80,000 Hours' list of the top global priorities](https://80000hours.org/problem-profiles/), they seem to think preventing an AI catastrophe is really important. Are they mainstream within EA?
Yes, 80,000 hours is mainstream within EA although there are plenty of EA folks who disagree strongly with their priority areas. The guide you posted gives a bit of a skewed perspective of what EA’ers actually care about. 80,000 hours principally provides career consulting to people interested in doing good. So they’re picking areas where the marginal value of an EA’er going into that area is highest. There are already lots of passionate people doing animal rights and development work (often with more specialized knowledge than a generic smart EA’er) so getting an effective altruist to pursue those careers doesn’t add much marginal value to the world above and beyond replacement.
That makes sense, thanks for clarifying this.
[deleted]
> if you accept the precepts

nice of you to mention the truck-sized loophole at the end there
I would draw the line at thinking AI safety is an actual problem worthy of a single cent of philanthropic spending, especially on the technical front, versus applying EA methods to real problems faced by real people now.
'Signalling': this seems to me like it only works if you tell everybody about it. As a vegan, I approve.

one single power user, David Gerard

eight rationalists wedgied for every dollar donated
Shut up and take my money.gif
It's hard to keep it up -- improvements in algorithmic efficiency, but inflation and those damn crypto bros buying all the GPUs... but we're committed to this. It's like a stablecoin.
backed by gold, comedy gold
to be fair, it IS all your fault. and i know that from LOGIC
attack of the 50 foot dgerard
\*zip\* \*thunk\*
Ow god it is made out of feet.
Same energy as "A powerful rat, named Charles Entertainment Cheese"

The “I’m donating based on principle, despite having no faith that it will accomplish anything“ take is at odds with literally every principle of Effective Altruism/Rationalism. How this community hasn’t had some great schism/flamewar/collapse yet is beyond my understanding.

They had schisms, but those were about the other opinions; that's what created the (now defunct) 'More Right' right-wing LessWrong spinoff site.

that’s pretty amazing, when you think about it. ONE GUY is singlehandedly more effective at defining the rationality movement than the entire rationalist community put together.

I think, if I were a rat, I wouldn’t even be mad at this. I’d sit down and take notes.

Sounds almost like... \*puts on glasses\* systematized winning.
[Hey!](https://i.kym-cdn.com/entries/icons/original/000/017/204/CaptainAmerica1_zps8c295f96.JPG)

How do you have the spare time to destroy both lesswrong and bitcoin at the same time?

if you hit lesswrong and ethereum at the same time, they're increasingly linked
lesscoin bitwrong
uhhhhh slate *moon* codex slate moon coindex scott alexander siscoin

I may be overthinking this, but are they using “better at PR” as code for “wrong”?

They’re blaming the rather negative conceptions of their community found online on the trickery and deceit of some dastardly foe (which naturally could not be imitated or surpassed by our virtuous rationalist hero Eliezer Yudkowsky) instead of accepting any legitimate criticism of their ideas or admitting that internet rationalism is a fringe, relatively unpopular fanbase.
It was me. I'm the dastardly foe. I am the electronic Satan at the core of the internet hell, spinning the threads to destroy everything. I'm making an AI based on Roko's Basilisk, except it's going to focus all its energies on imprisoning goofy lesswrong nerds into an eternal pain vortex.
they are so desperately unconvincing that anyone who is more comprehensible than them is automatically suspect
Taps into the rationalist distrust of people they don't consider their intellectual peers but have to admit are better at social skills than them; see also misogyny, incels, and the first and second Geek Social Fallacies https://plausiblydeniable.com/five-geek-social-fallacies/

This does make me wonder:

> Google results suggest “lesswrong cult” about 90% because David Gerard has spent over a decade promulgating the notion that LessWrong is a cult, and he has the pretense of legitimacy …

[and more along the lines of the whole accusation that you are abusing Wikipedia for your crusade]

I assume this kind of stuff and the other accusations are against Wikipedia's rules, and if proven could get you kicked off those projects, right? Have they ever tried to go after you that way?

(Them just admitting that actual experts are not convinced and disagree was funny, however. I imagine those debates are styled a bit like real physicists debating perpetual motion or flat earth people. I know of at least one AI researcher who flat out gave up debating AGI doomists because it was useless and a waste of time.)

E: lol

I would put it to anyone who still takes Big Yud, Scott Alexander, and the rest of them very seriously that the whole stated purpose of SI/MIRI/whatever they’re called this week is to try to make people take “AI risk” seriously. If after well over a decade of trying to do this, while being provided with almost unlimited funding by billionaires, their best efforts at this have been thwarted by one bloke with a Wikipedia account, that suggests that they’re not very good at their job, and maybe one should take that into account when looking at their assessments of the state of the world.

Source

yeah, if they had a case they'd have made it
Lol, saw that somebody tried to pull the 'Wikipedia is anti-Coinbase because of you!' line on your blog. You are a one-man dark age. ;)
love to live in the gap caused by the dgerard dark age
At least the music will be good.
They did get dgerard indefinitely banned from editing rationalist adjacent wiki articles, on the grounds, I believe, that rationalists hate him, therefore it's a rivalry/public feud.
Ah right, wasn't aware. And that is centrism for you. Clearly dgerard should have used the Kolmogorov option while being anti all this. (That makes no sense to me tbh, a feud? I would be more worried about a conflict of interest due to him publishing on it.)
Took me forever to find the dirty laundry, [here](https://en.wikipedia.org/w/index.php?diff=1010877251#Propose_topic_ban). I misremembered, it's only Scott, not all of rationalism. While looking for that, I incidentally discovered that apparently dgerard is also banned from using admin tools on transgender articles...because he believed Chelsea Manning "too quickly" and stopped people deadnaming her.
Did I mention centrism already? E: lol, pro-LW people going 'I'm pro-LW so a bit biased, but yes he should be banned'. The thread is also fun to read to see them argue 'David said he was a scientific racist and encouraged reactionaries', which we know to be true because Scott himself said so. (Actually they first said 'he said he was a neo-nazi', so lol at all the levels of hyperbole and lack of (self)awareness.)
Yeah, the arguments sure seem like a lot of pearl-clutching about propriety.

> a NYT article described by other sources as a "hit piece"

I bet these conveniently unmentioned other sources would be considered unacceptable to add to the Wikipedia article, but are miraculously acceptable sources for banning someone from editing it.
Yeah at least somebody brings this up down the line but urgh.

The real AI risk is that computers are idiots and the people who maintain them and program them are also idiots (this specifically includes me, I am an idiot), but normal people think that both of us are smart

I can only hope and pray and quiver in terror that you use your awesome powers for good.

don't worry, every post I make here increases existential risk

Thank you for your service.

It’s nice to be famous.

Cult thinking /and/ a persecution complex. What could go wrong?

So, what you and they are saying is, you’re basically a Beowulf-cluster Skynet in a trenchcoat, right?

^(……. who are you?)

https://foreignpolicy.com/author/david-gerard/ https://podcasts.apple.com/us/podcast/82-scott-alexander-slate-star-codex-with-david-gerard/id1449848509?i=1000512959759

Do you think most people think less wrong is a cult? I’d be surprised. Also surprised it comes up in Google’s autocomplete for the search.

I think most people have not heard of less wrong
Let me rephrase: most people who have heard of less wrong don't think it's a cult
I think you're probably right, but like, only in a way that doesn't matter.

There was a time, not that long ago, when very few people had heard of NXIVM, and among people who *had* heard of it, probably a majority did not think it was a cult. Then suddenly lots of people learned what it was at the same time as it was being revealed as a cult. You could have said, in like 2012 or whatever, that "most people who have heard of NXIVM don't think it's a cult" but that has more to do with who hears what kind of things, and the benefits of a low profile, than it does with whether the organization was or was not a cult. (It was)

That said, I personally do not currently think Less Wrong is a cult. I think it is a blog.
Most people assume by default that any obscure random group they heard about in passing isn't a cult, and don't care enough to form an informed opinion either way about less wrong specifically. What is this supposed to prove?
Personally I don't think it is a cult, but it certainly has cult-like aspects (and a few offshoots are very cult-like). Like everything, 'is X a cult' is a spectrum, and where you land on it also depends on where you are in the community, just as some weird self-help groups (not just Scientology; in fact they are the exception to this) can be very cult-like on the inside while people on the outside can still get some value out of them. It certainly has a weird Christian-religion aspect to it, and in the same way as Christianity, it also has cults. I don't think the whole 'is it a cult' or 'is it a religion' question is that worthwhile in the end, however. Strictly trying to define something and then missing the forest for the trees is a bit of a sticking point. And the whole question of what others think also isn't that valuable imho.
Yeah, I don't think "Is LW a cult" is even a productive question because it mostly comes down to an argument over the definition of "cult". Adjusting your criteria so that LW does or doesn't fit isn't meaningful.