r/SneerClub archives
Longtermists consider who should eat cake (https://existentialcomics.com/comic/485)

If you really want to help the long term future of humanity, you should probably just become a communist like a normal person.

Based and superstructure-pilled
A normal person would become an anarchist, you bootlicker.

The tears in MacAskill’s eyes are a nice touch at the end haha.

sls heh

Spongecake Launch System!

Existential Comics is great, as usual.

Maybe this is just affection for MacAskill, but I think he’s overall pretty good about incorporating into his view the really high amount of uncertainty that’s necessarily involved in predicting the far future. His actual prescriptions for things we should focus on are stuff like pandemic readiness and global poverty, which is pretty far from the pie-in-the-sky uncountable-simulated-beings stuff that some like to focus on.

As a hot take (if I were a rationalist, I would do that weird “Epistemic status: low” business): A lot of the complaints about EA from lefty sorts come from the same psychological place as complaints about veganism. It’s plainly immoral to be living such a hugely abundant life while people die of preventable diseases, but this realization would actually require people to give up something tangible. People come up with all manner of goofy rationalizations about how the people giving up the majority of their wealth to people who need it more are actually bad. It’s no different (psychologically, obviously the argument is different) from “well a mouse might get killed by a wheat threshing machine, so who can really say whether I should stop participating in factory farming”? Academic and academy-adjacent leftists have become totally coopted by neoliberalism, and can’t handle seeing people actually having the courage of their convictions, and sacrificing for the good. EA is cool and good, the fact that there are cringe elements is true of every movement, and not a good argument against it.

I think it is quite a stretch to compare criticism of vegans with criticism of people who buy a castle.
MacAskill can live a comfortable life spending only a fraction of the money he spends on himself. And longtermism is a silly idea even if utilitarianism is valid, because predicting happiness in the far future is pure speculation. If MacAskill wants space colonization, he should focus on issues that will affect people currently alive, and the next several generations, so *that* society could maybe afford to found space colonies. (Disclaimer: this is almost a bad-faith argument, because large sustainable space colonies are nothing more than sci fi wish fulfillment, IMO. What psychological effects would be caused by locking a group of people into a smallish structure on the Moon, Mars, or an asteroid (a realistic "space colony")?)
I think a lot of my problem with the longtermists is that they openly base their hypothetical trillions-of-simulants future on post-singularity technologies. Even ignoring that the rapture of the nerds is a questionably scientific hypothesis in the first place, the whole point is that beyond the singularity change is happening so quickly we can't predict what will happen with any meaningful degree of accuracy. Like the center of a black hole, where a finite mass in an infinitesimal space breaks our current ability to do physics, it breaks our ability to speculate about the future. But no, despite accepting this premise without serious question, let's instead assume that speculation on the other side of the singularity is not only possible but that improving and/or hastening those conditions is the *only* meaningful human activity.
> If MacAskill wants space colonization, he should focus on issues that will affect people currently alive, and the next several generations so that society could maybe afford to found space colonies.

That’s straightforwardly what he does.
MacAskill is a conman devoted to making the evil ideology of longtermism look palatable to the public. He even dishonestly downplays climate change's potential to end civilization by misrepresenting the views of actual experts. We are at 1.2 C of warming and impacts are happening decades ahead of schedule, despite the amount of average warming being consistent with predictions. MacAskill expects us to believe that **15 C** of warming won't be an existential risk (I mean it in the sense of a threat to the existence of civilization, not the kooky longtermist definition) because crops could theoretically be grown in some areas.
What’s the con? Like, is he secretly living it up in a mansion or something? In any event, I agree that he may be wrong on the merits re climate change, but I’m not sure that establishes dishonesty. I thought he was pretty thoroughly vindicated in his last spat with Torres accusing him of misrepresenting experts (the expert came and said that while he disagreed with MacAskill’s normative conclusion, that was outside his actual technical expertise). Was there something else that he got caught misrepresenting?
Yes and yes, for the first two. https://www.truthdig.com/articles/the-grift-brothers/ (It's easy to pretend to be altruistic when it costs one *nothing*.) And he brushed off 15 C of warming as no big deal, which is not what climate scientists say and is contradicted by paleoclimate data showing that 15 C of anthropogenic global warming would turn even high-latitude regions paratropical in a geological instant. You can find this information by reading a used historical geology textbook, or by following ScienceDaily articles on climate (they are summaries of new research papers). Here is an example, from when I checked: https://www.sciencedaily.com/releases/2023/02/230217120546.htm This is easily available knowledge. You might as well make excuses for Yudkowsky's bungling of middle-school-level biology in HPMOR.
The article is spot on about the criticism of "earn to give" and how that shreds any integrity that EA might claim. But it kind of misses the financial mark on MacAskill. It treats the marketing budgets (backed by other EA grifters) of his project as if they were his personal income. That doesn't really say anything about whether or not he makes any personal sacrifices.
Even if MacAskill truly believes in longtermist EA, he is misrepresenting facts about climate science (even if he believes his own claims about climate change not being a serious existential risk) and is the face of a movement filled with grifters.
Not gonna argue with that. That type of climate view, or anything close to it, should be a huge red flag for anyone looking at EA or "longtermism". (I put that in quotes, because I resent the fact that what should be an umbrella term really refers to a dumb and narrow viewpoint)
> Yes and yes, for the first two.
> https://www.truthdig.com/articles/the-grift-brothers/
> (It's easy to pretend to be altruistic when it costs one nothing)

The linked article says nothing about MacAskill’s personal income or lifestyle? Just that he spent money on PR. PR is not a mansion, or really ‘living it up’ in any meaningful sense. Re climate change: it’s not clear that MacAskill was disputing the climatology or geology. His point is that civilization, in some capacity, could still survive. Is there a scientific consensus against that?
> Re climate change: it’s not clear that MacAskill was disputing the climatology or geology. His point is that civilization, in some capacity, could still survive. Is there a scientific consensus against that?

There's definitely worry that substantially less than 15 C will end civilization; not in and of itself, but through cascading failures. But the real problem is the attitude behind this -- "xrisk is everything" -- which is how the grifters as a whole get to spend their time and donors' cash circle jerking about AI.
Sure, I get the worry; it seems to me like 15 C would totally fuck us. I can’t see that not existentially threatening a few countries with nukes. But the accusation was that he was misrepresenting an expert, not just that he has a dumb take.
Vegan orgs waste money on stupid shit all the time. You’re demanding a level of organizational efficiency and discipline that doesn’t exist in any movement. At that point, you may as well just be against social movements in general.
My dude there are multiple levels of frivolousness and organizational inefficiency that still do not yet approach buying a MOTHERFUCKING CASTLE. Are you kidding me? And believe me if some animal rights org did that I'd make fun of them too. At least if they dressed up kittens and puppies like medieval lords and took cute pictures of them that would still be infinitely more useful than hobnobbing with bloodsucking ghouls who pitch in pennies to solve 10% of problems they create. And I don't even have to criticize all _that_ because the castle is just so fucking ridiculous it's the low hanging fruit that deserves to be mercilessly sneered at forever. I call this Effective Sneertruism.
> that still do not yet approach buying a MOTHERFUCKING CASTLE. Are you kidding me?

I’m not sure what’s special about buying a castle. Usually people in the industry just look at % wasted, not what it’s wasted on. Is there some critique of that model that you’re basing this on, or just a hot take?
Look, if you don't already see what's ludicrous about an organization purportedly dedicated to numbercrunching the optimal way to spend every single dollar for the benefit of mankind (or whatever) buying a goddamn fucking castle then I'm not qualified to explain anything to you.
This sub has such a goofy relationship with expertise and credentials. Like, when your hot take is contra expert judgement, it’s just obvious. When it’s in line with expertise, the rationalists are totes anti-intellectuals for disagreeing. Pick a lane.
you may be on entirely the wrong subreddit how did you get here anyway
I'm here to sneer, but people keep spreading misinformation. I think sneers should be honest!
honestly, your posting is extremely bad and dumb and you should reconsider your approach
What different approach? People say something wrong, I say I think it’s wrong!
yeah, [this thread](https://forum.effectivealtruism.org/posts/xof7iFB3uh8Kc53bG/why-did-cea-buy-wytham-abbey?commentId=u3yJfbm2pes8TFpYX) explains their thought-process, which seems reasonable enough to me. tl;dr, saves costs on hotels, renting places for conferences, which is v. expensive in oxford. Also it's not strictly speaking a castle. Also, it isn't a sunk cost, they retain the ability to sell the property whenever, so it could be thought of as an investment
[deleted]
> justification for buying a giant mansion in the countryside

come on. I've seen *eyes wide shut* and *ninth gate*, I know what giant mansions are for
sitting and thinking Great Thoughts about the forthcoming robot apocalypse
I think the central leftist critique of effective altruism is more on-point, and pretty much the same as a critique of veganism (or maybe, as a better example, of recycling): it focuses on minuscule individual behavior changes while ignoring (or even accepting) the whole broken system out there causing the problem, and the substitution of the mild, inconsequential focus could actually stunt progress on solving the real problem (which is why plastics companies promoted recycling). But other critiques known in these parts might include: that it's a way for privileged people to feel they're doing the right thing without ever having to entertain human empathy; or that it's a way for rich people to gain the reputation rewards of public charitable donations without even bothering to think about what causes they should support; or that the intellectual pedigree from eugenics to transhumanism to singularitarianism to basilisk-worrying to computronium-maximizing is all right there in the open, and it's been the same kinds of people in this club all along
Nitpick, but recycling isn't an individual action; participation in a recycling program is, but the programs themselves are collectively organized by towns, communities, companies, etc.
This is about Longtermism, not Effective Altruism, and [MacAskill's own web site says this](https://www.williammacaskill.com/longtermism):

> future people have moral worth. Just because people are born in the future does not make their experiences any less real or important.

As far as ethics goes, this determination is, erm, controversial to say the least, because *why* do people have moral worth? He writes, "morality, at its core, is about putting ourselves in others’ shoes," turning the notion of "the Other" to refer to *hypothetical* people, whereas in most traditional ethics, the difference between the actual (a real Other) and the hypothetical is quite at the core of things. Rather than trying to draw lines between the actual and the hypothetical, he seeks to erase the line and to place the actual and the hypothetical on equal footing, which necessarily takes much away from the actual.
>Just because people are born in the future does not make their experiences any less real Is this man a god? Does he experience all moments simultaneously as one? "There's no difference between the present and the future" - Will MacAskill, apparently.
I'm not actually directly familiar with his writing, so this is a response to what you've said directly (rather than a defense). I think in non-philosophical circles, people commonly relegate to hypothetical status not only future (non-existent) people, but also the future states of those who do exist. The excuses tend to be "we don't know their preferences" or "we don't know their technological capabilities/constraints". We typically refer to this as shortsightedness or selfishness; isn't longtermism just a poor overcompensation for that? As such, isn't it wrong for "detail" reasons, rather than a fundamental divide between "actual" and "hypothetical"?
I mean, sure, you can quibble with his more technical views about discount rates and such, but that’s pretty far afield from the critique in the comic.
I don't think so because the question of *why* human life has moral value in the first place is at the core here, and if you want to answer it in some novel way it raises legitimate questions as to why.
No I get that. I’m saying there’s a difference between MacAskill being a serious philosopher who happens to be incorrect about the value of future people, or the decision theory involved in weighing our present actions, or utilitarianism or whatever, and the critique in the comic (and explanation) that suggests fraudulence and bad faith.
[deleted]
Sure! I think people shouldn’t claim dishonesty if the evidence isn’t there.
[deleted]
I don’t follow. I want other things too, but yeah, any improvement is good. I’m not sure what I said would imply otherwise.
[deleted]
Very philosophical of you to bungle an accusation of moving the goalposts, then get tilted when it’s pointed out.
[deleted]
😢
[deleted]
I is as the good lord made me.
Existential Comics is humour -- always has been -- and the point that longtermism avoids asking all the hard and relevant questions (of *why* people or decisions matter) in a way that just so happens to appeal to robber barons works (at least by the usual standards of that comic).
> and the point that longtermism avoids asking all the hard and relevant questions (of why people or decisions matter)

This is just factually false. Longtermist philosophers are very active in the literature explaining and defending their views about discount rates, decision theory, etc. It’s fine if you disagree with them, but pretending that they’re ignoring relevant questions is straightforwardly false.
I'm not talking about discount rates but about why people matter in the first place. You cannot even begin to consider how to extrapolate utilitarianism and consequentialism to people who do not exist unless you first grapple with *why* they matter. I'm not saying longtermists don't have implicit or perhaps even explicit approaches to that question -- although I'd appreciate a reference to an explicit one -- but they're not emphasizing it, even though it is the foundation of that philosophy's peculiarity. To take a crude example, an ethicist may argue that people matter because if you misbehave toward them they might hurt you (or others who see you misbehaving will distrust you and hurt you) -- to grossly oversimplify the social contract approach. Obviously, if that's why people matter, then the very notion of extrapolating it to people who may or may not live in the distant future doesn't work.
Do you think anyone answers that question in a satisfactory way?
I think that philosophy is mostly about grappling with questions in an honest way, not about finding answers.
Isn't sidestepping some questions a pretty reasonable approach in a lot of cases? I'm not well read in longtermism, but "what if future people have moral worth that's on the same scale as current people?" seems like a reasonable question to try to answer without having established where the moral worth of current people comes from.
> without having established where the moral worth of current people comes from

I don't see how. To use my simplistic example -- which even MacAskill touches obliquely (he mentions reciprocity as a factor that may be relevant) -- if the moral value of people comes, say, from their being able to harm you, then clearly people in the far future have no moral value at all, let alone hypothetical people (who really are the focus more than future people). You cannot even begin to ask whether you can extend moral value from a living person to something else unless you first ask what is the property that confers it on the living person.
> You cannot even begin to ask whether you can extend moral value from a living person to something else unless you first ask what is the property that confers it on the living person.

Hmm, I think the example with reciprocity is actually a good one for an alternate take -- you could also say that it's about the *consequences* of conferring it to someone. The social contract take just happens to be a source of moral value that is somewhat consequentialist in nature. But there's a whole universe of possible implications for an idea like "imbue future humans with moral value" that you can explore, philosophically, without having a clear answer for why you might do so. A clear answer may be impossible. Or, you could consider much of the spectrum of reasons we *might* convey moral worth, without attempting to commit to an answer.

And, maybe as a direct counterpoint -- people have come at the moral value question from many theological angles (and that's a narrow slice of possible approaches), and yet their subsequent thoughts managed to be part of a broad philosophical discourse. Surely, if the basis for moral value is something as narrow as a particular interpretation of "soul", then it would form a poor foundation for communicating with those who have a different religious outlook or philosophy.
> But there's a whole universe of possible implications for an idea like "imbue future humans with moral value" that you can explore, philosophically, without having a clear answer for why you might do so.

I agree, but longtermists are activists. They don't so much explore a notion as prescribe action or, at least, a very specific stance.
I’m not really sure of the argument here. Like, is it just moral anti-realism? Anti-utilitarianism? When you say why, do you mean causally or epistemically? Or something else? I don’t think that they can really be faulted for not responding to objections that you just thought of. It seems like your objection is either so broad that it applies to all moral philosophy (in which case, you should take it up with metaethicists, not longtermists) or it’s some bespoke objection that you thought of but isn’t really raised in the literature, in which case it seems weird to fault them for not responding to it.

EDIT: I’m not sure if you edited or I just didn’t see it:

> To take a crude example, an ethicist may argue that people matter because if you misbehave toward them they might hurt you (or others who see you misbehaving will distrust you and hurt you) -- to grossly oversimplify the social contract approach

I mean, longtermists are straightforward utilitarians, generally of the hedonistic type. Is your view that utilitarianism doesn’t give a good answer to why people matter? Or that, adopting utilitarianism, there isn’t an answer as to why currently non-existing people matter? If it’s the latter, that’s straightforwardly just the discounting question, but you said your argument isn’t about that. If it’s the former, I do think there’s a pretty robust utilitarian literature.
I think it's fairly obvious that any attempt to extrapolate a property (moral worth) from an original category (people) to a broader one (people and future hypothetical people) must first examine why the property applies to the original category. That's just a basic prerequisite. Otherwise, it's as meaningful as saying: fish live in the sea, fish are animals, therefore all animals live in the sea. You cannot go from "people have moral worth" to "hypothetical people have moral worth" without first examining why people have moral worth.

> Is your view that utilitarianism doesn’t give a good answer to why people matter?

I don't think it matters for utilitarianism, as it doesn't seek to broaden the category of people to non-people. But an ethics that does do that (longtermism) must justify why the extrapolation applies.
I’m not sure how this responds to what I said. Like, on most conceptions of utilitarianism, it’s not really people per se that are the good, but the capacity to accumulate utils. Why wouldn’t future util carriers matter? There’s obviously the risk that they might not ever exist, but that’s a problem for the decision theorists. Can I ask how familiar you are with this literature in particular, and modern analytical philosophy more generally? I don’t mean it in a ‘fuck off, pleb’ kinda way, I’m just trying to understand at what layer of abstraction and granularity you’re coming at this with.
Even utilitarianism is predicated on the existence of some party that experiences happiness as the result of utility. Longtermism isn't about the future happiness of some party, nor even about the future happiness of future parties, but considers it a moral obligation to ensure the proliferation of future parties, i.e. it is concerned with *hypothetical* beings, and so is far from a straightforward expansion of utilitarianism. Utilitarianism doesn't necessarily posit that our obligation is toward the *universe* to increase the number of beings that can experience happiness (nor does longtermism, which for obvious-yet-not-explicated reasons has a fixation on the human species).

But the main problem is that the core *argument* of longtermism is based on extrapolation that isn't justified. Most moral philosophies ask toward whom we have a moral obligation. In many, the *conclusion* is all people. But longtermism uses an empty rhetorical trick of extrapolation. Its argument is that if all people have moral value, then surely future people do (and it hopes we don't notice when future people -- i.e. people that will exist -- are replaced by hypothetical people; people who *could* exist). But without examining why all people possess the properties that bestow moral value on them, you cannot extrapolate that to new categories that may or may not possess that property. It's an argument that's as valid as "all fish live in the sea, therefore all animals live in the sea."
> But longtermism uses an empty rhetorical trick of extrapolation. Its argument is that if all people have moral value, then surely future people do (and it hopes we don't notice when future people -- i.e. people that will exist -- are replaced by hypothetical people; people who could exist). But without examining why all people possess the properties that bestow moral value on them, you cannot extrapolate that to new categories that may or may not possess that property.

I’m sorry, I simply don’t think that you’re familiar with the literature on utilitarianism if you think that this is what’s going on. Can you link a paper where longtermists supposedly pull this trick? Like, there are plenty of academic critiques of longtermism, but this isn’t one of them. Again, most utilitarianism holds that utils, or welfare, or stuff is what matters, not agents per se. It’s kinda telling that you refuse to say how well read you are on the topic, and just advance your pet argument.
Trying not to broaden this discussion too much, let me say that it is true that some part of my criticism of longtermism is applicable to utilitarianism in general and its focus on "mass ethics", but that's not what I mean by longtermism's rhetorical tricks. Nor do I mean MacAskill's only direct treatment of some utilitarian ideas in *What We Owe the Future*, in the chapter on population ethics, even though he does take what I view as [Sidgwick's assertion of faith in the "Utilitarian formula"](https://www.laits.utexas.edu/poltheory/sidgwick/me/me.b04.c01.s02.html) (which he doesn't want to dwell on, focusing his efforts on the welfare of women and the poor) as an urgent statement of policy (although this total view is not universal even among remaining utilitarians).

Rather, I mean the way that MacAskill presents this totalist view as a natural extrapolation of virtually *any* ethics. In fact, he begins this book by stating extrapolation as his primary rule: "I see longtermism as an extension of these [social justice] ideals." His argument really boils down to: if people matter then future people must matter; if future people matter then there may be enough of them that they must matter even more; and if future people matter a lot then hypothetical people must also matter. (Often, when he does those extrapolations in the book and they seem too obvious, he justifies them with, "well, I don't know if the extrapolation is correct, but *if* it is, it's important!") He extrapolates without justification, or perhaps to hide the fact that Sidgwick's extrapolations (how seriously he took them is debatable) were predicated on an a-priori adherence to a very particular axiomatic system of ethics, one that even utilitarians view as a challenge to utilitarianism rather than an obvious truth. In other words, probably as an oversimplification, the real answer to "why and to what extent people matter" that would justify such extrapolation is, "because a formula based on a problematic axiomatic system says so." Of course, MacAskill doesn't say that.

As to academic critiques, serious academic discussion of longtermism -- either pro or against -- is almost nonexistent (at this time), possibly by design. Longtermists prefer to publish either popular or policy texts because they're mostly interested in influencing policy, while serious academics don't want to lend longtermism credence by engaging with it. Outside of EA/longtermism-affiliated institutions that argue amongst themselves, the most common "academic critique" of longtermism is a snort. If you want to see a serious discussion of similar ideas, look for critiques of Sidgwick (you can find some [here](https://plato.stanford.edu/entries/sidgwick/)).
> His argument really boils down to: if people matter then future people must matter; if future people matter then there may be enough of them that they must matter even more; and if future people matter a lot then hypothetical people must also matter

I don’t think it’s really fair to go to a popular book, notice that the argument isn’t fully formally valid, and conclude that the position is therefore a rhetorical trick. What popular book do you think is not guilty of this? Nobody in the popular press writes in syllogisms. Like, any book is going to begin with assumptions that some will reject. Most science books don’t begin with a proof of Cox’s theorem or discussions of Solomonoff induction, or anything like that. I don’t think it’s a particularly immodest assumption that future generations’ interests matter. You might as well complain that he doesn’t defeat ethical noncognitivism either.

I don’t mean to be rude, but I think that if you’re going to critique working philosophers, you should at least start by seeing what other professional philosophers are saying about them, not developing your own half-baked critique. That’s not to say you should discard your own ideas, but I see lots of people on the internet going off into their own intellectual universe that has a tenuous connection to the actual field. I have no idea where you’re getting the view that MacAskill’s views are downstream of some kind of blunt axiomatic system. Since you keep avoiding the question, and are just responding to a popular work, I gotta think that you’re just upset that a popular book was not as tightly argued as you like.

> As to academic critiques, serious academic discussion of longtermism -- either pro or against -- is almost nonexistent (at this time)

How many papers/year do you consider almost nonexistent? It seems like a pretty active area to me. MacAskill, Ord, Bostrom, as well as a pretty big klatch of new PhDs, are publishing at a pretty good clip.
[deleted]
Idk, the field is pretty fractured these days, if the argument is just that nobody has pull, because of how diverse the field is, fair enough. But anyway, they all have reasonable citations and publish in decent journals. Idk what measure you’re using to measure pull.
[deleted]
> the field’s being fractured, they’re just not in direct communication with the rest of academic philosophy in any particularly meaningful way.

I think publishing in Mind and Nous puts them in contact in a pretty meaningful way. Lots of philosophers are spinning off on weird pet projects; I don’t see why them doing it puts them meaningfully outside the mainstream. If it does, then like half the profession is outside the mainstream.

> but even then from very early on he’s explicitly pursuing EA as an extension of his academic work which in the timeline quickly breaks off from the academic world anyway.

I don’t follow the relevance. Does Singer not have mainstream pull on the grounds that he has his animal stuff as an extension of his academic work? I don’t get how you’re using ‘mainstream pull’ if that puts one outside of having it.
[deleted]
> I’m not going to try to scholastically argue the conditions for being in or out of contact with mainstream philosophy

Thank goodness. I was beginning to think you’d go around on all the threads my hot take spun off and quibble with everything that annoyed you.

> just because you can bring up a publication in Mind or Nous here and there (you don’t specify)

Don’t specify what? Which journal? MacAskill has recent pubs in both.

> You say so yourself you don’t keep up with mainstream academic philosophy, just the longtermism stuff

No. Read again. I said I DO keep up with longtermism, and DON’T keep up with epistemology and Phil of sci. Let’s put on our logic-we-teach-to-hungover-freshmen hats and see what we can or can’t conclude from that!

> I think you should take it as plausible that you just have a completely skewed view of what’s going on.

It’s plausible. It just seems like the only one telling me that that’s what’s going on isn’t very good at the basic skills needed for analytic philosophy.
[deleted]
Though, it's a curious sociological problem as to why philosophers publish work in respected journals and hire lecturers for respected positions when 99.9% of the people working in the field regard the work in question to be one hair short of -- if at all short of -- crankery. Maybe this is just the usual problems of publish or perish overproducing literature and the neoliberalization of the academy prioritizing whoever can get media attention over whoever is doing solid research. But it becomes particularly jarring at moments like this.
People sharply disagree with longtermism, but I doubt you could break 50% saying it’s in the realm of straight up crankery. I think you’re grasping to explain why the thing that pisses you off isn’t reviled as much as you’d like.
> I doubt you could break 50% saying it’s in the realm of straight up crankery.

Have you run this by the faculty in your local philosophy department, or done anything relevantly like this?
I’ve had water cooler discussions. But no, nothing like a big poll. Like, my experience is that the field doesn’t really have strong epistemic standards for what counts as good and bad philosophy, outside of straight up reasoning errors, or writing unclearly. A math crank is a specific thing because there are well agreed upon rules as to what counts as good math. Most of the time, unless something is like formally incorrect, or downright evil sounding, most philosophers just sort of say ‘yeah that’s not my bailiwick, but I think they’re off in the wrong direction’ when encountering things that they think suck.
> Like, my experience is that the field doesn’t really have strong epistemic standards for what counts as good and bad philosophy, outside of straight up reasoning errors, or writing unclearly.

To be honest, I don't know how this belief could be sustained through so much as paying attention during a PHIL 101 class, so I'm not sure what to make of your appeals to your personal experience with the field. The idea that the typical view among philosophers is that as long as the inferences are valid and the language is clearly stated, there's no strong commitments to have about anything else, is just plain outlandish.

> most philosophers just sort of say ‘yeah that’s not my bailiwick, but I think they’re off in the wrong direction’ when encountering things that they think suck.

I was going to suggest you may be encountering a philosopher-to-normal translation issue, but you seem to have recognized such a statement as academese for "Sorry, but that idea sucks." In which case, I'm not sure what remains of the contention that was at hand.
> The idea that the typical view among philosophers is that as long as the inferences are valid and the language is clearly stated, there's no strong commitments to have about anything else, is just plain outlandish.

I’m being hyperbolic, but only a bit. You also have to refrain from saying something is intuitively plausible if only a schizo or fascist would believe it. You should probably only argue about things that other philosophers have argued about (it’s ok if it’s something stupid). Etc. In decision theory and history of Phil, you have to meet the standards of outside fields, I guess. I don’t think this line is particularly heterodox -- Korsgaard raised a somewhat less hungover-redditor version in her Dewey lecture last year.

> but you seem to have recognized such a statement as academese for "Sorry, but that idea sucks." In which case, I'm not sure what remains of the contention that was at hand.

‘Sucks’ and ‘crankery’ are not synonymous, I think. If they were, then the median philosopher thinks like half their colleagues are cranks. Tons of people think Searle’s work sux, but if that makes him a crank, I’m not sure what crankery means. I think there ought to be an ‘if everybody is a crank, nobody is’ escape hatch.
> I’m being hyperbolic, but only a bit. You also have to refrain from saying something is intuitively plausible if only a schizo or fascist would believe it.

Yeah, again I don't know how this belief could be sustained through so much as paying attention during a PHIL 101 class, so I'm not sure what to make of your appeals to your personal experience with the field. I mean, it's not helping your case much that, in defense of the non-crankery of the ideas you're championing, you're coming across like a crank.

In any case, among the number of factors I find distasteful in EA and allied Rationalist sentiment is the explicitly acknowledged principle -- and it hardly wins you points on this front to concede to sniping at Yudkowsky, when MacAskill is also explicit about this "hide your power level" shtick -- to hide the actual claims of the movement when engaged in public discourse, on the grounds that they're too important not to champion but the masses aren't prepared to understand them. So that, one has to -- by their own admission -- wonder if anything an EA or Rationalist allied partisan is saying to you, outside an EA/Rationalist insider space, is or isn't part of a deliberate scheme of self-conscious lies.

Among the effects of this particular sentiment of distaste is that it renders me less inclined than others here to spin away my hours writing dozens of comments for someone who everyone knows may be deliberately lying to me the whole time. Which is why, after all, I hadn't initiated the conversation with you in the first place. And why, at this point, I'll leave you to it.
> Yeah, again I don't know how this belief could be sustained through so much as paying attention during a PHIL 101 class, so I'm not sure what to make of your appeals to your personal experience with the field.

What are some of the standards, on your view? I'll grant that I'm probably significantly less experienced in the field than you, but those seem to be the main things, to my mind, that get a paper accepted.

> when MacAskill is also explicit about this "hide your power level" shtick -- to hide the actual claims of the movement when engaged in public discourse, on the grounds that they're too important not to champion but the masses aren't prepared to understand them. So that, one has to -- by their own admission -- wonder if anything an EA or Rationalist allied partisan is saying to you, outside an EA/Rationalist insider space, is or isn't part of a deliberate scheme of self-conscious lies.

I've not heard that MacAskill does this. I've seen that Singer at least conceptually defends esotericism (and so probably really is hiding his power level) -- is that what you meant? I definitely dislike this kind of thing, and if MacAskill is doing it, I'm 1000% off the MacAskill train. Everything I've seen from him, he seems to be pretty much on the "it's better to be scrupulously honest" train tho.

> who everyone knows may be deliberately lying to me the whole time. Which is why, after all, I hadn't initiated the conversation with you in the first place. And why, at this point, I'll leave you to it.

I mean, this would be a pretty poor use of my time if I were lying.
> So it’s just MacAskill now?

Well, you said I was deluded if I believed any of them had mainstream pull (unless you meant I’m only deluded if I thought the three as a collective had mainstream pull? That’d be strange). I went for the clearest-cut counterexample. Sue me for being a busy man. Your mother doesn’t please herself after all.

> I take it from the fact you named two fields in analytic philosophy plus longtermism that you don’t keep up with the rest of analytic philosophy either.

That sounds like a yp, that you’re turning into an mp.
[deleted]
> Insufferable hair-splitting looks bad on you when you introduce terms I didn’t use, “any of them”.

How’d you mean it then? As a collective? Like, they can individually have pull, but once grouped together, they lose it? If that’s what you meant, fair enough, but that seems kinda implausible.

> If we’re going to go down that blasted road then yes, like I said, I will give you that MacAskill

Kinda fucked up that mr (Dr?) philosophy-knower couldn’t just look into what was being published before condescending to a nobody redditor.
> notice that the argument isn’t fully formally valid, and conclude that the position is therefore a rhetorical trick

You're right that my characterisation of MacAskill's bad arguments as a rhetorical trick is my own interpretation. It is possible he's just bad at argumentation, or so attracted to his conclusion that he doesn't notice how unconvincing the way there is to those who aren't. Although he's not as bad as Bostrom.

> I don’t think it’s really fair to go to a popular book, notice that the argument isn’t fully formally valid, and conclude that the position is therefore a rhetorical trick.

Longtermism is primarily confined to popular work, as it's a popular project. What would be unfair is to consider their work outside the domain in which it is intended to have an effect (and harder, because there isn't much there).

> you should at least start by seeing what other professional philosophers are saying about them

Not much, about longtermism.

> MacAskill, Ord, Bostrom, as well as a pretty big klatch of new PhDs, are publishing at a pretty good clip.

These are all people working on the same project, at the same institution. Most of what they publish on the subject is popular work, position papers, and policy papers. It's like talking about one of those American think tanks and pointing out that the members often cite each other's work and seem to take it seriously, as a sign that the think tank is considered serious.

BTW:

> Most science books don’t begin with a proof of Cox’s theorem or discussions of Solomonoff induction

Are you a Rationalist?
> You're right that my characterisation of MacAskill's bad arguments as a rhetorical trick is my own interpretation. It is possible he's just bad at argumentation, or so attracted to his conclusion that he doesn't notice how unconvincing the way there is to those who aren't. Although he's not as bad as Bostrom.

It’s also possible that you’re an armchair philosopher and have a weird pet counter argument that he understandably didn’t foresee. Like, your proposed social contract theory that wouldn’t entail longtermism -- it’s not even clear it’s realist! I don’t think you’d find a moral philosopher who wouldn’t think it’s fine to have a popular book about practical ethics assume realism at least!

> It's like talking about one of those American think tanks and pointing out that the members often cite each other's work and seem to take it seriously, as a sign that the think tank is considered serious.

Looking at just MacAskill’s refereed papers, it doesn’t look like many of the citations are from Oxford. Again, you’re not really showing yourself familiar with the literature, but you’re perfectly comfortable making sweeping pronouncements.

> Are you a Rationalist?

No. I’m as annoyed as anybody that legitimate Phil of Sci is popularized by a guy peddling HP fanfic and a robocalypse. Since I’ve answered your question, can you just answer mine? What’s your background wrt analytical philosophy? Do you generally keep up with the literature?
> It’s also possible that you’re an armchair philosopher and have a weird pet counter argument that he understandably didn’t foresee.

True, except that my arguments are fairly common and quite old; it's just that no one has grappled with a restated axiomatic utilitarianism in a while (by that I mean outside of referring to old works), and it doesn't seem like anyone is in a mood to do so at this time, including the longtermists themselves. They seem to be mostly interested in convincing policy makers rather than scholars, and address their output to them. If they write popular works, we need to judge them as such. MacAskill focuses on the more obvious objections that would follow from accepting his premise but rejecting his conclusions based on an inability to predict the future. But a concrete moral plan that explicitly states that the value of *hypothetical* people is greater than that of actual people must have its premises examined.

Indeed, if we view longtermism as the concretisation of [Sidgwick's conclusion](https://www.laits.utexas.edu/poltheory/sidgwick/me/me.b04.c01.s02.html) that "It seems, however, clear that the time at which a man exists cannot affect the value of his happiness from a universal point of view; and that the interests of posterity must concern a Utilitarian as much as those of his contemporaries," then the fact that Sidgwick's whole point is that this is a conclusion of [a certain axiomatic system](https://www.stern.nyu.edu/sites/default/files/assets/documents/con_037040.pdf) that rests on a "universal point of view", while MacAskill seems intent on avoiding stating those axioms, becomes quite crucial, certainly given that many objections to Sidgwick rest on rejecting his framework. Even if we can accept moral realism as an implicit assumption, the axioms that lead to those longtermist conclusions are not universal among realists.

> Like, your proposed social contract theory that wouldn’t entail longtermism

I merely pointed out that the ability to extrapolate *any* property (e.g. moral worth) is crucially dependent on stating why and how it applies to the original category (e.g. people); Sidgwick clearly believed that, as he used his axioms as justification for his extrapolation (although I don't know how seriously he took that extrapolation as a matter of policy).

> Looking at just MacAskill’s refereed papers, it doesn’t look like many of the citations are from Oxford

There are barely any non-popular/policy papers on longtermism, or citations, at all, and by barely any I mean virtually none. To go back to my "rhetorical trick" claim, look at how [one of the *relatively less* popular essays on longtermism](https://philarchive.org/archive/GRETMC-3) avoids the axioms at its core. That essay clearly states the policy position: "we can have a much bigger effect on the value of the future by trying to change its long-term rather than its short-term value. That in turn suggests that we should devote much more of our focus to considering the long-run effects of our decisions, and makes plausible the strong longtermist claim that, in many situations, we ought to perform the action that we expect will have the best effects on the long-term future."
And yet it pulls the same trick of seamlessly going from *future* people ("The case for strong longtermism begins with the observation that our future could be vast") to *hypothetical* people ("So, if we can reduce the chance of human extinction, we can predictably improve the long-term future" -- that future is not the wellbeing of future people but the existence of hypothetical people).

> legitimate Phil of Sci

If longtermists are interested in making longtermism legitimate philosophy, they haven't yet shown it, although it's still all fairly recent and things could change.
[deleted]
With the longtermist related stuff, generally yeah. With Phil of Sci and epistemology, no, unfortunately.
As a communist and an academic, it's hard to argue in good faith against what you've said about academy-adjacency. Here, an American communist makes the same complaint about (among other things) the inauthenticity of leftism among academics: https://youtu.be/Yg19NJgVEcI
[deleted]
😢
[deleted]
> lazily and pre-emptively dismissing people

I'm not sure there was laziness or preemption going on here? Or dismissal, for that matter.
[deleted]
It's not exactly coherent, but I wouldn't call it a lazy or preemptive dismissal... It is kind of a strawman. It is almost certainly wrong (if I'm reading it correctly -- again, not super coherent).
[deleted]
I genuinely like how you're like the resident ice bitch (for lack of a better word) of this subreddit
I’m not really a SH fan, the subreddit is just good for medium intelligence people to discuss stuff. In any event, I’m not sure what I’m lazily dismissing. I said it was a low status hot take. Not sure what you’re expecting. I think calling someone a baby is probably a more lazy and dismissive thing to say, but that’s just me.
[deleted]
I didn’t say I don’t believe it. I do. I’m saying that I don’t believe it strongly enough for me to dismiss anyone who disagrees.
[deleted]
What? “Hey, here’s what I think is psychologically motivating these critiques” is not per se dismissive of the actual critique. Like, if you’re just against trying to understand a broader context for argument, that’s fine, but that’s much broader than my take.
[deleted]
What specifically about my post do you feel was dismissive, beyond just generally having a position on psychological motivations behind critiques?
[deleted]
Lol.
[deleted]
I think you should just argue your point.
[deleted]
😢
we should point out that one thing sneerclub absolutely is not is a debate sub
Calling something "low status" as an insult is *such* a tell, dude. I do have to congratulate you on the efficiency of indicating, in just two words, that literally nobody should take you seriously!

[removed]

Maybe you'd find things more comfortable in your usual haunts, like PCM? I'm sure you can go and complain about wokeness and people will smile and clap and you'll feel a little spark of self-satisfaction.
> Maybe you'd find things more comfortable in your usual haunts, like PCM?

jesus christ even pcm posters don't deserve sneers that harsh
AFAICT, the cartoonist is one of the harmless idealistic communists, not a tankie. So the cartoonist's specific political views are all but irrelevant to the sneer.