r/SneerClub archives

For newcomers here, I thought it might help to explain some of the extensive jargon used by the extended rationalist community. Without further ado:

X-Risk: This is probably the most used term across the rationalist community. In theory it's supposed to refer to any existential (human extinction) risk; in practice it refers to existential risk created by a strawberry-molecule-maximizing AI that decides to turn the world into delicious strawberry patches via Drexler nanotechnology. Rationalists aren't concerned with minor risks such as climate collapse, ecological collapse, or nuclear war, since these could potentially leave enough breeding pairs for homo sapiens to continue (and eventually reach the longtermist goal of trillions of simulated happy humans).

S-Risk: This is like X-Risk, except either the AI itself or simulations of human beings get tortured for eternity. Basically like hell but with more cryonics, simulations, and tech nerds. Will be applied to individuals who don't worship the AI god. Maybe the strawberries will contain simulations of humans being tortured? Who can say.

Bayes Points: These are imaginary internet points that rationalists use to determine whether someone's world model (aka belief system) is accurate. Somebody should probably turn this into a cryptocurrency (and give me a 10% stake).

Epistemic: The number of times you can use this word per sentence is directly correlated to your IQ. Bonus IQ if you manage to use it more than 5 times in a paragraph.

Bayesian Statistics: As used by rationalists, this simply means taking in new evidence to update your beliefs, because just saying "I incorporate new evidence and update my beliefs accordingly" is clearly not rational™ enough.

Priors: These are the sub-beliefs you hold that lead to your conclusions. Example use in a sentence: "I've Updated my Priors as a result of epistemic reflection on new evidence regarding AI X-Risk due to the Waluigi Hypothesis."

Translation: I've changed my beliefs about when the AI God will kill us because of a blog article I read but don't really understand that used "epistemic" many times.
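
For the record, the mechanics behind all of this is one line of arithmetic. A minimal sketch of a Bayesian update in Python, with completely made-up numbers (not anyone's actual credences about the strawberry apocalypse):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E)
# All numbers below are invented for illustration.
prior = 0.01        # P(H): how likely you thought the claim was beforehand
likelihood = 0.9    # P(E|H): chance of seeing the evidence if the claim is true
false_alarm = 0.2   # P(E|~H): chance of seeing the evidence anyway

evidence = likelihood * prior + false_alarm * (1 - prior)  # P(E), total probability
posterior = likelihood * prior / evidence                  # P(H|E), the "updated prior"

print(f"belief before: {prior:.3f}, after: {posterior:.3f}")  # 0.010 -> ~0.043
```

That's it. That's the whole epistemic technology.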

FOOM: This is where the AI Strawberry Maximizer will recursively self-improve its own code, despite the laws of physics and basic logic, to become the AI Strawberry Maximizer God in order to fulfill its terminal goal of turning the world into delicious strawberry patches.

AGI: Refers to AI that can accomplish any task a human can (with the exception of all of the things that really matter like, you know, plumbing, carpentry, building)

Timeless Decision Theory (TDT): A branch of decision theory/game theory built for those who don't like peer review, formal education, building on prior work, or any mainstream philosophy or mathematics. Invented by a self-proclaimed "genius" who never graduated high school, has no formal education, and has never been published in an academic journal.

The Sequences: Much like Timeless Decision Theory, and by the same author, but longer. Used to gatekeep people from criticizing AI X-Risk: "Your arguments are invalidated by the Sequences, please read them." The Sequences are said to be over 100,000 words, but that may just be the furthest anybody has gotten before dying of boredom.

Roko's Basilisk: A future AI superintelligent God that, due to the implications of Timeless Decision Theory (which it will obviously use, because of reasons!), eternally tortures simulations of people who found out about the possibility of ASI but didn't donate all of their time and money to making it happen.

Rationalism: The belief that one can eliminate human cognitive biases and errors in favor of systematically rigorous mathematical thinking. In reality it just involves using complex jargon for simple concepts (see "Priors") to make sentences sound smart.

Longtermism: A form of radical utilitarianism that holds that we should be maximizing total utility in the universe (note that utility is rarely if ever formally defined) regardless of when that utility occurs. Practically this means focusing research and development efforts on making a Friendly AI as opposed to curing cancer, solving climate change, improving third-world living conditions, etc. The longtermist dream is trillions of simulated sentient beings using Dyson spheres, spreading out across the universe, because that's clearly a future EVERYONE would want.

Effective Altruism: The philosophy of maximizing happiness in the world by ensuring that gifts to charity are mathematically rigorous and well defined. At first this involved buying a lot of mosquito nets at the expense of all other forms of charity. Has largely been hijacked by the AI X-Risk movement. Now involves using donation money to hire personal assistants so EAs can focus their time on thinking really hard about the potential actions of a future super-intelligence.

The Sequences are around a million words.

Sorry I think normal numbers are a tool of oppression by The Cathedral. Can you express that number in hpmors?
nearly two!!

Missing: acausal robot god

So if the strawberry patches are happier and lead better lives than other organisms, that’s a net positive from an EA pov, right?

It would eliminate suffering, mostly because strawberries lack nerves or the ability to be super scared of existential hypotheticals. Or maybe I'm wrong and need to update my priors and accept that strawberries are actually very emo.
Imagine a strawberry with its little leaves as hair. I can definitely see an emo strawberry being possible. Edit: Google "emo strawberry", it's kind of awful looking.
Until we have strawbemo music I will never be impressed. Everyone knows the only important part about emo is the music.

FOOM: also, the actual sound that an AI superintelligence makes as it ascends to godhood.

If you ever hear a loud “FFFOOOOOO-” coming from your server room then you should smash the “stop training” button on your wall as quickly as possible to stop the AI model before it escapes.

> Rationalists aren't concerned with minor risks such as climate collapse, ecological collapse, or nuclear war, since these could potentially leave enough breeding pairs for homo sapiens to continue

…which will eventually lead humanity to be capable of creating another superintelligence down the line, thus necessitating another nuclear war to prevent the next superintelligence from forming, and so on.

the circle of life :)
It'll keep us busy, at least. Robot Heaven forbid we get bored or, worse, comfortable.

Eliezer Yudkowsky has, sadly, been published, at least in an academic conference. I haven't read the paper, but the concept they introduced has actually been useful for me. He's just the fourth author, which might explain it.

And regarding x-risk and climate change, do you think there’s a non-negligible chance of climate change killing humanity? I’m worried about it like you, but I think Rationalists differ from us in their prioritisation, not in their analysis (in this very specific case).

> do you think there's a non-negligible chance of climate change killing humanity?

Sure, 'climate change' as just the phenomenon of rising average global temperatures maybe doesn't, but this is not how it confronts us in the actual world. We feel it as storms and floods and sea level rise. Droughts and crop failures and heat waves. These are things that intensify competition for resources and stoke wars. Unless we radically restructure our institutions of government and production, it cannot be just 'climate change'. It is food fights and water wars. In those conditions, eventually, someone violent and desperate enough is going to have motive, means, and opportunity to engage in nuclear war. Not to mention it all makes pandemics more likely, and the general severity of natural disasters will continue to increase essentially for the indefinite future. Seems to me it's basically meaningless to try and extract climate change, or really anything like this, as a discrete risk with these sorts of timescales and criteria. It's all too deeply interconnected and contingent.
I'm too lazy to look for them, but there are posts on the EA forum that lay out this view. Of course, they're not that popular, seeing as acknowledging that reality is complex would make the "project" more difficult.
To be really honest, there are all kinds in academic research. If you ever feel bad for using research by someone bad or stupid, imagine being a mathematician working with Teichmüller spaces. At least no one at MIRI was a Nazi Party member who got himself killed on the Eastern Front.
I'm indeed a Jewish mathematician who feels bad whenever Teichmüller is mentioned, how did you know?
Lol. It's a perverse kind of irony that now you have the Grothendieck-Teichmüller group.
Wait, wasn’t it mandatory to join the party at some point? I assume he was higher up though.
Teichmüller was really fanatically antisemitic and signed up and campaigned actively in various ways cuz of that.
Afaik it was never obligatory, but there were cases where people were "unknowingly" party members. In his case, however, definitely not, and he was not a simple "Mitläufer" (passive fellow traveler) either. He joined the party, and additionally the SA, as early as 1931, before Hitler's seizure of power. He was also a prominent member of the National Socialist student group at his university, which, among other things, organized a boycott against their Jewish professor.
99% of species have gone extinct. It is a symptom of terminal Protagonist Syndrome to blithely place oneself in the 1%.
I don't think climate change will wipe out humanity on its own (though it will probably reduce our numbers significantly), but it could theoretically happen, as opposed to the AIpocalypse, which couldn't.
Mostly it's just going to cause a lot of misery that we could have prevented if we weren't ruled by short-sighted financial concerns. That would validate the longtermist position to an extent, except that those short-sighted concerns created this problem that is already affecting us, and there's very little reason not to do something about it now.
I mean, there have been multiple significant climate shifts (±5°C in less than 10 years) in Earth's history, and there is a [huge concentration of methane on the East Siberian Arctic Shelf](https://www.pnas.org/doi/10.1073/pnas.2019672118) that looks increasingly likely to suddenly release over a [time scale of months/years](https://www.ecowatch.com/siberia-sea-boiling-methane-2640900862.html?utm_campaign=RebelMouse&socialux=facebook&share_id=4962695&utm_medium=social&utm_content=EcoWatch&utm_source=facebook).
Yeah, I've also been worried about the methane in the Arctic for some time now. But humanity'd still survive 5 degrees, I think? We'd have nice tropical beaches near the poles and we won't have to think about the billions who died along the way, or something?
A 5 degree rise will make vast swathes of the planet uninhabitable for human life. The entirety of the Gulf states, to start.
I think his point is that a few million would survive, which by definition also means billions would die.
Even disregarding that: say that somehow people managed that with great difficulty. Warmer oceans will play havoc with phytoplankton growth; less will grow, and it won't be as nutritious for the sea animals that eat it. In turn, that means fewer fish, and the fish that do exist will be less nutritious for us.

Would it make sense to think rationalism is theoretically possible from a materialist point of view, but that rationalism is 100% impossible in this day and age with our current technology and science? And that anyone who says it's currently feasible is a liar?

Eliminating all bias (non-trivially) sounds impossible regardless of the level of technology; it's an essential part of the human condition, and possibly an essential part of intelligence. Importantly, for Rats, eliminating bias often doesn't mean identifying flaws in their own thought (evidently), but contesting the "peer pressure" and nefarious influence of the "Radical Lib" mainstream education and media, trying to teach us ethics and human decency, how dare they!! So "de-humanifying" humans is outside of current (and probably future) tech but, IMO, permanently a bad goal. Anyone who says it's feasible might be an idiot rather than a liar :-P
They always seem to fail to realize that an absence of ethics and human decency IS an x-risk.

I laughed. See my thread on Jargon on SSC

> Epistemic: The number of times you can use this word per sentence is directly correlated to your IQ. Bonus IQ if you manage to use it more than 5 times in a paragraph.

Epistemological? What are you talking about??