For newcomers here, I thought it might help to explain some of the extensive jargon used by the extended rationalist community. Without further ado:
X-Risk: This is probably the most used term across the rationalist community. In theory it’s supposed to refer to any existential (human extinction) risk; in practice it refers to existential risk created by a strawberry-molecule-maximizing AI that decides to turn the world into delicious strawberry patches via Drexler nanotechnology. Rationalists aren’t concerned with minor risks such as climate collapse, ecological collapse, or nuclear war, since these could potentially leave enough breeding pairs for Homo sapiens to continue (and eventually reach the longtermist goal of trillions of simulated happy humans).
S-Risk: This is like X-Risk, except instead of going extinct, either the AI or simulations of human beings are tortured for eternity. Basically like hell but with more cryonics, simulations, and tech nerds. Will be applied to individuals who don’t worship the AI god. Maybe the strawberries will contain simulations of humans being tortured? Who can say.
Bayes Points: These are imaginary internet points that rationalists use to determine whether someone’s world model (aka belief system) is accurate. Somebody should probably turn this into a cryptocurrency (and give me a 10% stake).
Epistemic: The number of times you can use this word per sentence is directly correlated to your IQ. Bonus IQ if you manage to use it more than 5 times in a paragraph.
Bayesian Statistics: The way the rationalists use Bayesian statistics, this simply means taking in new evidence to update your beliefs, because just saying “I incorporate new evidence and update my beliefs accordingly” is clearly not rational™ enough.
Priors: These are the sub-beliefs you hold that lead to your conclusions. Example use in a sentence: “I’ve Updated my Priors as a result of epistemic reflection on new evidence regarding AI X-Risk due to the Waluigi Hypothesis.”
Translation: I’ve changed my beliefs about when the AI God will kill us because of a blog article I read but don’t really understand, which used “epistemic” many times.
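For the genuinely curious, all of the “updating” amounts to one formula, Bayes’ rule. Here’s a minimal Python sketch of a single update; the hypothesis, the blog post, and every number in it are invented purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Toy numbers, made up for illustration only.
prior_doom = 0.10        # prior belief that the Strawberry Maximizer gets us
p_blog_if_doom = 0.6     # chance of reading a scary blog post if doom is real
p_blog_if_fine = 0.5     # chance of reading it anyway (scary blog posts are popular regardless)

posterior = bayes_update(prior_doom, p_blog_if_doom, p_blog_if_fine)
print(f"updated belief in doom: {posterior:.3f}")  # ~0.118 — one blog post barely moves it
```

Note that when the evidence is about as likely either way, the posterior barely budges, which is arguably the whole point of the exercise.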
FOOM: This is where the AI Strawberry Maximizer will recursively self-improve its own code, despite the laws of physics and basic logic, to become the AI Strawberry Maximizer God and fulfill its terminal goal of turning the world into delicious strawberry patches.
AGI: Refers to AI that can accomplish any task a human can (with the exception of all the things that really matter like, you know, plumbing, carpentry, and building).
Timeless Decision Theory (TDT): A branch of decision theory/game theory built for those who don’t like peer review, formal education, building on prior work, or any mainstream philosophy or mathematics. Invented by a self-proclaimed “genius” who never graduated from high school, has no formal education, and has never been published in an academic journal.
The Sequences: Much like Timeless Decision Theory, and by the same author, but longer. Used to gatekeep people from criticizing AI X-Risk: “Your arguments are invalidated by the Sequences, please read them.” The Sequences are said to be over 100,000 words, but that may just be the furthest anybody has gotten before dying of boredom.
Roko’s Basilisk: A future AI superintelligent God that, due to the implications of Timeless Decision Theory (which it will obviously use, because of reasons!), eternally tortures simulations of people who found out about the possibility of ASI but didn’t donate all of their time and money to making it happen.
Rationalism: The belief that one can eliminate human cognitive biases and errors in favor of systematically rigorous mathematical thinking. In reality it just involves using complex jargon for simple concepts (see “Priors”) to make sentences sound smart.
Longtermism: A form of radical utilitarianism that holds that we should be maximizing total utility in the universe (note that utility is rarely if ever formally defined) regardless of when that utility occurs. Practically, this means focusing research and development efforts on making a Friendly AI as opposed to curing cancer, solving climate change, improving third-world living conditions, etc. The longtermist dream is trillions of simulated sentient beings powered by Dyson spheres spreading out across the universe, because that’s clearly a future EVERYONE would want.
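For what “regardless of when that utility occurs” cashes out to in practice, here’s a toy sketch (every value in it is invented) comparing a conventional time-discounted utility sum with the longtermist version, which effectively sets the discount rate to zero so a hypothetical year-1000 bonanza of simulated humans outweighs everything that happens this century:

```python
def total_utility(utility_by_year, discount):
    """Sum a utility stream over time, weighting year t by discount**t (discount=1.0 means no discounting)."""
    return sum(u * discount**year for year, u in enumerate(utility_by_year))

# Invented utility stream: modest utility for a millennium, then a simulated-human bonanza.
stream = [1.0] * 1000 + [1e12]

print(total_utility(stream, discount=0.97))  # conventional discounting: the year-1000 bonanza rounds to ~0
print(total_utility(stream, discount=1.0))   # longtermist accounting: the bonanza is essentially all that counts
```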
Effective Altruism: The philosophy of maximizing happiness in the world by ensuring that gifts to charity are mathematically rigorous and well-defined. At first this involved buying a lot of mosquito nets at the expense of all other forms of charity. Has largely been hijacked by the AI X-Risk movement. Now involves using donation money to hire personal assistants so EAs can focus their time on thinking really hard about the potential actions of a future superintelligence.
The Sequences are around a million words.
Missing: acausal robot god
So if the strawberry patches are happier and lead better lives than other organisms, that’s a net positive from an EA pov, right?
FOOM: also, the actual sound that an AI superintelligence makes as it ascends to godhood.
If you ever hear a loud “FFFOOOOOO-” coming from your server room then you should smash the “stop training” button on your wall as quickly as possible to stop the AI model before it escapes.
…which will eventually lead humanity to be capable of creating another superintelligence down the line, thus necessitating another nuclear war to prevent the next superintelligence from forming, and so on.
Eliezer Yudkowsky has, sadly, been published, at least in an academic conference. I haven’t read the paper, but the concept it introduced has actually been useful for me. He’s just the fourth author, which might explain it.
And regarding x-risk and climate change, do you think there’s a non-negligible chance of climate change killing humanity? I’m worried about it like you, but I think Rationalists differ from us in their prioritisation, not in their analysis (in this very specific case).
Would it make sense to think rationalism is theoretically possible from a materialist point of view, but that rationalism is 100% impossible in this day and age with our current technology and science? And that anyone who says it’s currently feasible is a liar?
I laughed. See my thread on Jargon on SSC
Epistemological? What are you talking about??