r/SneerClub archives

I randomly ended up here after searching for ‘effective altruism’, since I didn’t know anything about it & was curious what it was. Now that I’ve browsed this sub for about 10 min, I am completely confused. I still don’t really know what effective altruism is, and even more so I’m confused about what this sub is about. Somehow I’ve been on reddit for about 10 years & never ended up in this subreddit. Hope that question doesn’t come off as rude.

It’s a subreddit dedicated to following the developments of, and mocking the bad philosophy, science, and politics of, a certain group of people who self-describe as rationalists. It is an offshoot of the badphilosophy subreddit, created because too many posts were coming up about this specific group of people. As for who the rationalists are: they’re a movement that stemmed from nu-atheism and reactionary politics and is vaguely associated with Silicon Valley tech venture capital. They claim to analyse all human relations through the lens of rationality and scientific analysis, but end up just giving all their biases a scientific veneer.

I was so happy to have come across this subreddit yesterday. I've been extremely skeptical of this "rationalist" sophistry for a while and was increasingly worried I was the only one!
Welcome! The water is nice, hop on in!
Thanks for explaining, that does clear things up a lot.

This sub is for making fun of internet pseudo-intellectuals. The reason you found it when googling “effective altruism” is that “effective altruism” is a buzzword used by many internet pseudo-intellectuals, and that’s part of what we’re here to make fun of. If you are curious about effective altruism, this sub is probably not where you want to be to learn more about it.

To be fair, now that I've found out more about it from here & other sources, it is not what I thought it would be at all.
To be fair, effective altruism is something that's become much discussed by Rationalists but not really created or advanced or owned by them. There are academic philosophical points to debate about utilitarianism and consequentialism but those are independent of the whole Rationalsphere, which seems to have simultaneously subscribed to those views wholeheartedly but also misunderstood and selectively applied them. So if you go wandering the intertubes for EA stuff, much of it will be theirs, but you can go straight to the sources like Singer and GiveWell and the difference should be obvious. Broadly speaking: if you're looking at charities that try to figure out how to do the most measurable good per dollar right now, that's probably a decent thing to do (but consider the limits of what's measurable); however, if they start talking about allocating that dollar to think tanks that sit around imagining solving problems that don't exist yet (existential risks), you're probably in charlatan land.
Yeah tbh I was just trying to figure out what a good way to donate to charities was, since you always hear about different charities turning out to be scams. I didn't realize there was an entire philosophical debate & discussion around these nuances related to giving, let alone a subreddit devoted to 'sneering' at certain outlooks, lol. I did end up finding charity navigator, which is more what I was actually looking for. This was an interesting rabbit hole to briefly saunter down though. I feel like I stepped into a cyberpunk novel or something.
Can you explain why you think evaluating existential risks is charlatanry, or at least link me to somewhere that does? I’m very interested in EA and existential risk, to the point where I’m considering going into AI or animal welfare research, BUT I’m also very skeptical of the whole Rationalsphere, so I’d love to hear your thoughts. Thx.

EDIT: I’ve been doing some digging and it seems many here don’t necessarily think core EA ideas are bad per se, but there’s a lot of associated fringe weirdo ultra pro-capitalist Silicon Valley cult bro douchebags who throw the label around to legitimise their ideas and justify funnelling funding into their own projects. It’s given me pause for thought - like I said, I’m very interested in both animal welfare advocacy AND the future of AI - but I want to be careful who I associate with and what kind of career I build, so if you have any more info I’d really appreciate it.
Existential risks can't really be adequately quantified, and it's often not clear what we would need to do to adequately defend against them. This makes any charity dedicated to stopping them suspect. Let me give you a few examples.

Rationalists will say that we need to devote enormous money to creating "friendly AI" because if we don't, someone may create an unfriendly AI and it will go full Skynet on us. I'm not convinced that this is even possible--we don't know how intelligence works, and it may be that, [as Maciej Cegłowski points out](https://idlewords.com/talks/superintelligence.htm), human intelligence is the result of trade-offs rather than a single figure that can be maximized, or that an AI's superior intelligence wouldn't matter much against sheer human numbers and logistics, or that they'd have complex motivations that mean converting the entire world into paperclips wouldn't be something they'd desire, or that an AI wouldn't be able to recursively self-improve itself, or that an AI wouldn't have access--even if it entered the Internet--to all of the manpower and materials it would need to wipe out the human species. Now, obviously rationalists disagree with my assessment. But they have as little (or even less) evidence for their assertions as I do. Their numbers for "this is how likely Skynet is to kill us" are made up. They're plugging in values that *feel* right; they have no way to derive them empirically. And because they're utilitarians, this matters a lot. It may well be that the creation of killer AGI is so unlikely, or that their efforts are so unlikely to succeed, that everyone should instead be entering ecology and trying to avoid the sixth mass extinction, or devoting themselves to anti-nuclear advocacy.

My second example? Nick Bostrom believes in creating a global surveillance system in order to guard against what he calls "black-ball technologies"--which is to say, technologies that would be existential risks. One of his proposals was to force everyone to wear cameras and microphones with GPS transmitters all the time. Suspicious activity would result in an explanation being demanded of the wearer, and if the answer wasn't good enough, the police would swoop in. Of course, a system with that kind of authoritarian control over people will inevitably favor totalitarian rulers, who may well wield "black-ball technologies" regardless of the effects on the lower classes so long as they can assure their own survival. With such absolute control, organizing against them would be impossible. So Bostrom's solution may make the problem *worse*, not better.

I'd suggest focusing on things that we know are issues now. Leave extreme hypotheticals--gray goo scenarios, Skynet, the like--to science fiction writers. We're nowhere near being able to create any of those things, even if they're possible.
Nice. Lots of brilliant points! And I think the concerns you raise are totally legitimate. I'm personally not certain superintelligent AI is even technically feasible, let alone conscious AI, but that being said, I still think the possible risks warrant *some* level of attention and research. But it's without a doubt far too skewed in that direction, and when I think of other problem areas like pandemic preparedness, nuclear policy, global health, food insecurity, wealth inequality, climate change and animal welfare, I believe they deserve as much if not significantly more attention than AI. I'm not embedded in the community - only planning a career in that direction - but I believe there are plenty of people in there who are not myopically focussed on AI and who are also concerned it takes up too much attention and funding.

Given my limited skills, AI safety research appeals to me and I was certainly making moves to work in AI (and I still might do), but I do think it's way overblown and mostly a pet project of billionaires with saviour complexes. Maybe I should focus on something else? It's a shame because I felt I'd made great strides in figuring out what I want to do lol. Maybe I can use AI to get into a global health or animal welfare career.

As for staunch utilitarians, it certainly seems that's the most prevalent mode of thinking within the EA community, but I think there's some shift towards other ethical frameworks, which I believe is necessary and totally warranted. Utilitarianism, imo, is a useful mode of thinking in *some very limited* scenarios, and EA would benefit greatly from exploring other ethical frameworks like an ethic of compassion and justice, deontic theories, etc. and introducing more left-wing thought.

As for Bostrom, did he mean that's something that *could* happen or something he believes *should* happen, ideally? If it's the latter, that's laughably bonkers. Either way, he's a philosopher so I tend to take what they say with a hefty grain of salt. Take Singer for example. I'm amazed at what he has done for the animal rights movement and promoting "giving" in general, but his die-hard utilitarian stances have led him to say some outright bonkers and repugnant things.

Thanks for the reply, it's given me a different perspective and plenty to think about.
> Given my limited skills, AI safety research appeals to me and I was certainly making moves to work in AI (and I still might do), but I do think it's way overblown and mostly a pet project of billionaires with saviour complexes. Maybe I should focus on something else?

Well, what are your skills, if you don't mind sharing?

> As for Bostrom, did he mean that's something that could happen or something he believes should happen, ideally? If it's the latter, that's laughably bonkers.

He doesn't [seem to consider it inherently desirable](https://aeon.co/essays/none-of-our-technologies-has-managed-to-destroy-humanity-yet), but he seems to view it as the least worst option, and seems to think there's just reason not to wait until we know a "black-ball technology" is an existential risk (given that such a system would take time to set up).
> Well, what are your skills, if you don't mind sharing?

Engineering with some coding skills. I didn't want to go into engineering, so I'm currently applying for Master's programs in machine learning, data science or cognitive science (all of which have a strong AI component). I'm still undecided whether I'll pursue research (the prospect of doing research in cognitive psychiatry appeals to me) or work for an "EA" org (I'm using that term loosely) after my Master's. I thought I'd only be in a good position to work in AI research, but I'm starting to realize all of these study paths could still be relevant for a career in global health, poverty or animal welfare?
Yeah. Data science is very important in a lot of things--it sounds like you could go into anything involving large sets of data and complex models and do well for yourself, given that path of study. That includes ecology, meteorology, global health, and more besides.
Exactly. Thanks. I was having a bit of a panic after hearing others' perspectives on AI research, which has taken up a lot of my mental focus over the last few months, but not all is lost. Like a lot of people, I want my career to actually benefit others, and I've realized that can be way more easily achieved in an area addressing more immediate problems. I'll continue down this route. Thanks again.
[deleted]
That sort of thing always makes me wonder if they realize that not everything is WiFi enabled.
> I'm not convinced that this is even possible--we don't know how intelligence works, and it may be that, as Maciej Cegłowski points out, human intelligence is the result of trade-offs rather than a single figure that can be maximized, or that an AI's superior intelligence wouldn't matter much against sheer human numbers and logistics, or that they'd have complex motivations that mean converting the entire world into paperclips wouldn't be something they'd desire, or that an AI wouldn't be able to recursively self-improve itself, or that an AI wouldn't have access--even if it entered the Internet--to all of the manpower and materials it would need to wipe out the human species. Now, obviously rationalists disagree with my assessment.

Yes, most rationalists are unsure but lean in the opposite direction (that seeing the advances and the rate at which AI is improving, there's a higher chance of catastrophic AI than non-catastrophic AI). However, it's totally reasonable to believe the opposite, as you do. But this is missing the point entirely. The argument is that no one is actually studying these questions ("is intelligence the result of tradeoffs?" "what would be an AI's motivations?"), so of course we don't know the answers. While I agree that a lot of their funding ideas are self-serving, there are legitimate reasons to want to dedicate funding to actual experts concerned with AI safety. It's why a lot of rationalists supported OpenAI until they basically became Microsoft-owned.
I don’t think anyone should be totally unconcerned with EA stuff, and everybody has to have something that motivates them, whether it’s woodworking or averting the end of all life on Earth itself. I still often shudder a little bit thinking of the Yellowstone Caldera, for example. But when even supposedly respectable people like Nick Bostrom are quite obviously cashing in on the Davos after-dinner speaking gig circuit to push their hobby horse within that realm, it’s more than infuriating to be told that AI is the big-picture issue by people who think that climate change or wealth inequality is a small-potatoes problem you can solve with a bit of technology growth here and a bit of Coase theorem there.
Thanks for your reply. I know some lovely, caring people who work in what they’d describe as EA or EA-adjacent fields (usually animal rights advocacy), so I was quite surprised to stumble across a lot of disdain on this subreddit. I probs missed it because I only come here occasionally, as I hate the preponderance of pseudo-intellectuals talking on things they know nothing about, so it was refreshing to see the hilarious (and left-wing!) takedowns and mockery in this sub. But I didn’t realize there was this whole other side to EA. Shame. I still think my career is moving in that direction however. The careers advice helped me get motivated to move my career in a direction that followed things I’m passionate about and that I felt matched my skills, when I was at a real loss about what to do with my life. I’ll just be sure to be wary of those who use the EA label who may have some shit takes and very unsavoury ideas. I always saw it as a tool anyway, and definitely not an ideology that should be adopted. The ultra-rationalist side of things has always rubbed me the wrong way anyway, for obvious reasons.
If you take EA as EA, and especially in its real-world, not-exclusively-on-nerdy-messageboards incarnations, you shouldn’t have anything to worry about, or at least no more to worry about than in any other public-spirited enterprise, whether that’s charity or working for a pharmaceuticals developer or whatever. I certainly wasn’t exposed to the rationalist stuff when I was originally introduced to EA, in spite of being in close quarters with people who took it very seriously - often, as you mention, for animal rights reasons.
I think the better side of EA suffers from the movement being increasingly more focused on artificial intelligence, and there are people in it who don’t agree with that shift. [Dylan Matthews](https://www.vox.com/2015/8/10/9124145/effective-altruism-global-ai) for one, although he does let them publish dumb stuff on Vox sometimes. If you’re part of the EA community and NOT obsessed with AI, you could help push them in a better direction.
I totally agree and that has been a concern of mine. I'm personally not certain superintelligent AI is even technically feasible, let alone conscious AI, but that being said, I still think the possible risks warrant *some* level of attention and research. But it's without a doubt far too skewed in that direction, and when I think of other problem areas like pandemic preparedness, nuclear policy, global health, food insecurity, wealth inequality, climate change and animal welfare, I believe they deserve as much if not more attention than the future of AI. I'm not embedded in the community - only planning a career in that direction - but I believe there are plenty of people in there who are not myopically focussed on AI and who are also concerned it takes up too much attention and funding. Given my limited skills, AI research appeals to me and I was certainly making moves to work in AI (and I still might do), but I do think it's way overblown and a pet project of billionaires with saviour complexes. Thanks for the reply and article link - it's given me a better perspective and plenty to think about.
try this article here https://rationalwiki.org/wiki/Effective_altruism it is neither altruistic, nor effective
Thanks, I'll give it a read.

we are jocks who beat up nerds

mkay
That is sort of an in-joke. But also a complaint from Rationalists about people who disagree with them: that those people are the jocks going after them, the poor nerds. The idea of the jock/nerd split comes up a lot (same with dividing people into 2 camps generally: low decouplers vs high decouplers, object vs meta level, scouts vs ... dunno, forgot, hedgehogs and foxes (that group is more Rationalist-adjacent however)). With their more meta way always being superior, of course. All of that is referenced here in a quick joke. Most people here are also what you would call nerds irl.
Also the whole jock/nerd split seems to be a dying remnant of 80s/90s media, from when tech and nerd culture was deeply inaccessible. Nowadays it doesn't really make much sense: nerd culture is not only mainstream but deeply embedded in all culture. Dragonball is as much an expression of ethnic minority culture as it is of anime weeb culture, Travis Scott had a massive collab concert inside Fortnite, Kanye West had a time where his entire style was "Akira but black", and MF Doom was just trying his best to be a real-life comic book villain. So whenever I hear people unironically complain about being nerds in a world of jocks, I just think they've been emotionally stuck for the last few decades.
[Yep, Superman plays warhammer40k.](https://youtu.be/mAzMQGZ95xU) It just sucks that modern culture is reinventing the same whole bullshit with the chad stuff.
Nerds have been swept away by the trans tsunami.
There's so many sub communities of the internet that I just did not realize existed.
These are more common ideas held within the Rationalist sphere tbh, apart from the hedgehog/fox one, which is a ribbonfarm thing (which they took from somewhere else iirc). Ribbonfarm is more self-aware than most about this, as they usually upgrade the binary idea to a 2x2 grid (and are aware how much of a simplification this is). Not even exactly sure how much you could call these communities vs blogs vs single authors.
You might as well be speaking in Russian, lol.
Sorry yeah, this is all just various blogs and the ideas of various blogs. [Ribbonfarm](https://www.ribbonfarm.com/) is a blog, mostly insight porn. And here we mostly talk about the [lesswrong](https://rationalwiki.org/wiki/LessWrong) blogs/writers/offshoots (slatestarcodex for example was a blog about stuff like that (because people were mean to Scott Alexander he moved to substack, I forgot the name of that), and slatestarcodex is responsible for the neonazi training center which is r/themotte). It is all a lot of insider nerd bullshit. And I myself have no idea where ['the last psychiatrist'](https://thelastpsychiatrist.com/) (another blog) fits in all of this. But that blog is long dead and most people have forgotten about it. (Also, it was a lot of insight porn where the only insight was 'narcissism'.) And yes, I know this doesn't help at all, don't worry, my head also hurts.
Funny you mention TLP. I stumbled in there a while back and the devotees reminded me of rationalists
> And yes, I know this doesn't help at all, don't worry, my head also hurts.

Yeah man, this subreddit should be sponsored by Ibuprofen.
A lot of the jargon used in ‘rationalist’ spaces on the internet is almost intentionally as obtuse and difficult as possible to follow for outsiders. Kinda the same thing as when I used to bullshit about the Star Wars expanded universe books with my friends, except this terminology is used to justify awful shit and try to post the hottest possible take at all times
I consider myself a prep, personally
Oh, that Scott Alexander! The sportos, the motorheads, geeks, sluts, bloods, wasteoids, dweebies, dickheads, they all adore him.
Yeah, it's hilarious because in my case at least, I'm a computer science guy and much of my disgust for Yudkowsky and his ilk comes from their butchering of my field.
The time of shoving nerds into lockers has passed, and with it has passed the hope of a better tomorrow.
accurate

I was in a similar boat as you. This sub was randomly recommended to me, and I thought the posts were interesting, so I subbed. I don’t tend to involve myself in discussions too often, but I enjoy reading them from time to time. I’m not knowledgeable enough to always follow along.

> I'm not knowledgeable enough to always follow along.

This probably means you are a healthy and well-adjusted individual.
9/10 psychologists agree that staying out of the SSC meta is good for mental health
is the 10th psychologist Scott Alexander himself

Please read the sticky post, which should have been the first thing that appeared on the subreddit when you arrived here.

You might get fewer questions like this if you put a brief description of what this sub is on the sidebar. The response I got to my question from u/jaherafi explained it very well; maybe paraphrase what they said for the sidebar if you want fewer posts like mine.
> a brief description of what this sub is on the sidebar.

But that is no fun. Consider figuring out how the sidebar applies here to be sort of a rite of passage.
I don't know. There's a thin line between "rite of passage" and "inside joke" (not saying that's what the sidebar is, just how it may be perceived), and personally, I would prefer a useful sidebar that may attract more people to join us in dunking on rationalists. However, that risks the rationalists catching on and brigading this sub, so maybe not a good idea.
> that risks the rationalists catching on and brigading this sub so maybe not a good idea

They’ve known we’re here for years now; for a long time it was hard to go hours or days without somebody on one of their boards waxing conspiratorial about how this subreddit is a den of genuine, hardcore, malicious evil that was literally the heart of a blue tribe cabal out to ruin people’s lives for sport. The “brigading” was pretty intense too.
> attract more people to join us

What, are you trying to *grow the movement* or something?
Eight lives saved per post made.
Clearly we should just add a [link to rationalwiki at the end](https://rationalwiki.org/wiki/LessWrong#Criticism); that should make it clear wtf is going on.
The person you’re talking to is not responsible for the sidebar, which the people who are responsible for it already like as it is
Same thing that happened to you happens to me every now and then, and I just visited this sub again today after a couple of journeys here last year. Best straight-up definition I can give (sorry if it offends subscribers, but I don’t owe you guys anything) is that this sub is just a circlejerk. No posts offer good content, no discussion increases my understanding whatever the topic, and comments are often dismissive towards key scholars, and it all happens under the premise that they’re against “rationalists”, but I have never seen a discussion with anything beyond anecdotes and bitter comments often ad hominem in shape and form, and I surely don’t remember reading any “empirical” data to support the “eloquent” comments.

Slatestarcodex, lesswrong, rationalwiki, IDW, and a couple of other “intellectual” websites/blogs/subs are often talked down, and religion also comes up as a topic just to reinforce the stroke force applied to the mutual jerk, and after skimming several posts, same as you, I still have no idea if the subscribers here think anyone is right. Maybe Berlinski? Who knows!?

Please don’t stalk me guys, I just happen to pass by, not sure why, and I’ll politely see myself out now. Sorry for being blunt, I mean no offense. Be safe, enjoy the circlejerk, best regards.
"Key scholars" of what exactly? Yudkowsky isn't a scholar by any stretch of the imagination.
And I'm not buying into stupid shit like cryonics cause Ray Kurzweil said so.
Hey, if you ever feel like you need to take a load off and just have some lighthearted, less heavy fun about people who take themselves way too seriously, you are free to join in on the circlejerk. It is funny that you include rationalwiki in that list however; it does make me doubt how much attention you really paid. Don't think anybody here talked down about rationalwiki.

E:

> if the subscribers here think anyone is right.

Well, that is a good example of you missing the point about a lot of things.

First, it isn't about 'who is right' but about what you are right about. Scott is right about computer science, for example, but just not about his weird opinions about sneerclub/feminism/the plight of the nerds/him being the worst victim in all this (he is not wrong about him also being a victim however (of cult indoctrination/the patriarchy/and yes, some sneerclub harassment) and an aggressor (blog name, and weird focus on sneerclub as the most evil)) etc. Even Yud is right about a lot of things (death sucks, we don't express it here enough, but really my condolences Yud; and on that note, AGI alignment is hard (but it is in the same category of existential risk problems as 'ensuring that all nations have equal and fair access to the space elevator or we will start another world war' is hard (but I hear you say, we can't build a space elevator, and nobody is working on building one. Exactly))). Even Yud, in a rare moment of epistemic humility, understood it was not about being right, but about being less wrong (which makes his disciples spinning off and creating moreright pretty funny). Of course, this all was at the start, and it has now failed and is taking on cult qualities.

Second, we aren't really a like-minded collective who all agree with each other on everything. So we can't tell you 'who' is right, esp as it is also more about who is wrong, and what subjects they are wrong/right about, as above.

Third, marxbro. Marxbro is right. (Unless it is about IQ, then the answer is Stephen Jay Gould (and no, I will not read his work).)
Reading this was a bit too much like parsing Lisp code.
Sorry about that. It prob could do with a few dozen editing passes. But pff, effort.
> No posts offer good content

Neither agreeing with nor disputing this, but there are greater things in life than content for you to consume.

> and comments are often dismissive towards key scholars

Cap. There are no key scholars whom anyone here is dismissing. Scott Aaronson is a top-tier scholar, but no one here dismisses his in-field expertise--rather, people recognize that being a genius at A doesn't mean you necessarily know much about (totally unrelated field) B.

> but I have never seen a discussion with anything beyond anecdotes and bitter comments often ad hominem in shape and form, and I surely don't remember reading any "empirical" data to support the "eloquent" comments

While the sub is not designed to accommodate this type of criticism, it does still happen from time to time if you pay attention. On the whole, however, most people here aren't keen to waste time arguing something obviously moronic with people who should know better.

> rationalwiki

Yes, notable rationalwiki hater dgerard is here constantly bashing it. Shameful tbh.

> I still have no idea if the subscribers here think anyone is right. Maybe Berlinski? Who knows!?

What makes you think the subscribers here share any meaningful mutual belief?

> No posts offer good content, no discussion increases my understanding whatever the topic, and comments are often dismissive towards key scholars, and it all happens under the premise that they're against "rationalists", but I have never seen a discussion with anything beyond anecdotes and bitter comments often ad hominem in shape and form, and I surely don't remember reading any "empirical" data to support the "eloquent" comments.

This describes your post.
> Scott Aaronson is a top-tier scholar, but no one here dismisses his in-field expertise--rather, people recognize that being a genius at A doesn't mean you necessarily know much about (totally unrelated field) B.

As [a physicist who currently specializes in quantum information theory](https://scirate.com/blake-stacey/papers), I have as much reason to respect his genuine technical achievements as anyone, and probably more than most.

> While the sub is not designed to accommodate this type of criticism, it does still happen from time to time if you pay attention.

Just picking personal favorites, I recall [this thread](https://www.reddit.com/r/SneerClub/comments/po1yqv/short_of_content_lesswrong_pilots_500_payments/hd5winu/?context=3) and [this one](https://www.reddit.com/r/SneerClub/comments/m50kuz/why_many_worlds/) regarding physics, [this](https://www.reddit.com/r/SneerClub/comments/pwctrg/scooter_pontificates_on_the_decline_of_modern_art/) on literature and architecture, and [this](https://www.reddit.com/r/SneerClub/comments/pfc43o/training_manipulating_a_person_like_you_would_a/) on the proper techniques for cooking asparagus.
Berlinski? [lolz](https://zenoferox.blogspot.com/2008/04/ma-vie-en-prose.html)

> No posts offer good content

hey, I link to [the *Square One TV* song about Archimedes](https://www.youtube.com/watch?v=Bz6VtQJ6-kA) every chance I get
I have posted plenty of insightful comments here.
> sorry if it offends subscribers, but I don’t owe you guys anything

Nor were you asked to
IMHO, bystanders might offer a different perspective than insiders. I replied only to OP, not to the whole 10 other comments I got from subscribers (with reasonable comments!). My honest 0.02, but I mean no harm, carry on.
I'm a total newbie (more "bystander" lol than insider), but what got me here was people recommending HPMOR one too many times. That's it, that's my villain origin story. Ratfic radicalized me against the cause.
Still gotta read this one. What will it do to my brain?
It won't do anything to your brain except make you aware of the absolute contempt most rationalists have for art and culture.
Haha I’ll put it on my stack, but I’m more of a Bostrom guy than a Yudkowsky fan
It wasn't a recommendation.
That's the point haha
You say you're more into Bostrom, but "The Fable of the Dragon-Tyrant" has the same issues. The dragon tyrant operates trains that run on time and stuffs people into cattle cars, because larding on the Nazi iconography really drives home the message of "death = bad." Meanwhile, he's quick to attribute things like natalism and overpopulation to the biological fact of ageing and dying, presumably so the reader doesn't stop to wonder how population numbers would be managed if this kind of life-extending tech was widely available... though who'd be getting access to it (probably not the kid who wants his grandma back) is an open question. Also, how do you get rid of an immortal tyrant? Anyway, when I die I'd like to be buried in a mushroom spore-infused suit that cleans my decomposing body of toxins so they don't leak into the surrounding environment. And if that makes me a death-cultist rather than a person who cares more about climate change than life extension, so be it.
You raise some interesting points but co-opt some flawed analogies. Let me be clear that I'm not a technophobe, and for what it's worth I recognize modern medicine is already good at promoting dysgenics, but we don't talk about this, and the pandemic already addressed this, in a way.

> "The Fable of the Dragon-Tyrant" has the same issues

0) What issues? You're relating this to what?

1) Not sure Bostrom uses trains in a nazi semiotic fashion; unsure why you'd make a correlation that isn't there.

2) Natalism *was* a policy after WWII, not anymore; overpopulation is not an issue nowadays, and iirc ~10-11b was the common-sense pre-pandemic estimate ceiling, but we are seeing a reverse trend in projection numbers, and steady/modest progressions are recently [being attributed](https://www.ons.gov.uk/peoplepopulationandcommunity/populationandmigration/populationprojections/bulletins/nationalpopulationprojections/2020basedinterim) to immigration, not to [2-3 child policies](https://en.wikipedia.org/wiki/Two-child_policy) such as in China. But overpopulation is a scarcity issue, not a dysgenic issue.

3) Regarding access to life-extending tech, we agree: open question, and a philosophical/economical one; not sure it equates with Bostrom's arguments on research investment (the point of the tale, at least for me).

> how do you get rid of an immortal tyrant?

4) We don't. See Asimov's Foundation.

I really don't dig philosophical questions, but regarding technologies I'm more prone to be utilitarian/teleological than deontological *for most cases*. Not a deterministic rule, but I'm a curious being.

> if that makes me a death-cultist rather than a person who cares more about climate change than life extension

It doesn't. Good for you and the planet, kudos.
Sorry if it wasn't clear, but I meant that Bostrom's story has the same issues as HPMOR, despite obvious differences in subject, tone, length, etc. As part of promoting the author's stance against death and for cryonics and similar tech, HPMOR changes several key aspects of JK Rowling's canon in order to poke holes in her world-building (which IMO is unnecessary since there's plenty of holes to choose from in the actual canon). One instance is Dementors going from JKR's metaphor for depression to stand-ins for death, which allows Harry to discover a way to defeat them through sheer willpower, by imagining a glorious transhumanist future in which death is overcome, etc. (Once he does this, he manifests some kind of "super" Patronus that takes the form of a human being rather than an animal. In canon, the Patronus is something like a totemic animal protector, so to have a "perfected" version take human form implies Y. sees us as the pinnacle of evolution, when evolution has no teleological endpoint. But that's a whole other issue that I won't get into.) Anyway, I assumed you'd know I was talking about HPMOR since that's what the thread is about.

Regarding the second point, I think Bostrom does use the image of trains filled with people to remind the reader of the Nazis and the machinery of their death camps, exploiting the emotional associations (because Nazism is cultural shorthand for evil). For instance, he writes: "Every twenty minutes, a train would arrive at the mountain terminal crammed with people, and would return empty." A bit earlier, he writes: "What occupied the king's mind more than the deaths and the dragon itself was the logistics of collecting and transporting so many people to the mountain every day." The Nazis are known for using euphemisms like the "final solution" and focusing on the technical management of extermination, so the language here doesn't seem to be neutral.

Now, there's nothing wrong with making an analogy with the Holocaust. Lots of writers do it all the time for various reasons. But my objection to how Bostrom does it is the heavy-handedness. He talks about a natural biological process as if it were an inherent evil that no one would fail to condemn if presented with the right arguments.

I'll get to your other points later as I'm pressed for time now, but I hope this clarifies something about my position. FYI I've upvoted all your comments as I appreciate open and friendly dialogue!
> Bostrom's story has the same issues as HPMOR

I believe I said I haven't read HPMOR yet, so yeah, it whooshed me.

> by imagining a glorious transhumanist future in which death is overcome, etc. (Once he does this, he manifests some kind of "super" Patronus that takes the form of a human being rather than an animal

Oh. I see the relevance now.

> so the language here doesn't seem to be neutral

Arguable to say the least, but [I've seen (pdf)](https://www.nickbostrom.com/papers/vulnerable.pdf) Bostrom drawing parallels with nazi equivalences before, so I can't dismiss your point, indeed.

> there's nothing wrong with making an analogy with the Holocaust. Lots of writers do it all the time for various reasons

Intellectually cheap, but valid, I must admit.

> He talks about a natural biological process as if it were an inherent evil that no one would fail to condemn if presented with the right arguments.

It isn't evil, as perhaps he gives the impression there (again, I believe that essay's core argument is about research funding), but I do agree this topic is on the radar of several research initiatives, and regardless of merit or even the nature of it, it is a pursuable objective, and one that I argue is unavoidable, since I reckon humans won't stop creating gizmos until they reach god-like status, because this is the ultimate objective of a son: to be *better* than their parents (whoever our *parents* are in this particular case haha).

> FYI I've upvoted all your comments as I appreciate open and friendly dialogue!

I appreciate the gesture, but don't worry, karma isn't a currency for me; the feeling is mutual.
> No posts offer good content, no discussion increases my understanding whatever the topic

Looking for effort posts on a shitposting sub will do that to ya
[deleted]
Don’t tell me what to do.
[deleted]
We’ll have to disagree on this cowboy. You should know better: wild horses run faster!!

This is how I feel about /r/sorceryofthespectacle

I joined Slatestarcodex and Reddit recommended that I should join this one as well, which has got to be some kind of ironic suggestion, as I gather they are competing philosophies. The more the merrier, as long as the discussions are at least halfway intelligent.

It is brave/foolhardy to post so soon after discovering a new sub, especially one so much of an “ingroup”.

Reddit makes suggestions based on tags. Probably both subs have a "Philosophy but weird" tag. That's why communists get the neoliberal sub suggested and vice-versa.

I suggest you read the sticky. Specifically the old posts which are linked in there near the top.

There’s a significant number of “sneerers” here who initially liked a lot of what was posted on Slate Star Codex but became disillusioned with that community because… reasons.

This sub makes fun of Scott Alexander