r/SneerClub archives
I made a long (1h48m) video about issues with LessWrong after being involved with it for about 11 years. I also added an alternative reading list for those who want to learn about rationality without reading the Sequences. (https://www.reddit.com/r/SneerClub/comments/11d8ec4/i_made_a_long_1h48m_video_about_issues_with/)

video: https://www.youtube.com/watch?v=Fl_ZQlYKdz4

crucial note: a majority of the sequences are not about rationality. They are instead about pet topics that the thought-leaders of LW care about. (The Metaethics sequence was both irrelevant to rationality and painful. If you are interested in metaethics, I recommend the introductory texts listed at https://leiterreports.typepad.com/blog/2019/12/best-introductory-texts-in-moral-philosophy.html, which were compiled by professional philosophers; Daniel Weltman’s suggestions particularly stand out.)

I tried to make the reading list focused on rationality specifically. I know the politics of these authors aren’t appreciated here; that is what philosophy and political science are for. This list was specifically oriented around reducing bias, i.e. literally being less wrong.

——————————————————————

Here’s my rationality reading list, for those who are interested in getting a better foundation than what is provided by the Sequences/HPMOR:

  1. while unorthodox, I usually suggest this above everything else: the PowerScore Logical Reasoning Bible, while meant as LSAT prep, is the best test of plain-language reasoning that I am aware of. the kinds of questions you are meant to do will humble many of you. https://www.amazon.com/PowerScore-LSAT-Logical-Reasoning-Bible/dp/0991299221 and you can take a 10-question section of practice questions at https://www.lsac.org/lsat/taking-lsat/test-format/logical-reasoning/logical-reasoning-sample-questions — many of you will not get every question right, in which case there is room to sharpen your ability, and powerscore’s book helps do that. (edit/note: there are three subsections of the LSAT, and one is titled “logic games”; the ‘games’ section is not this, and also IMO not useful. I specifically think the reasoning section excels in evaluating plain-language reasoning, in the sense that it is widely available, normed to about 100,000 people annually, and requires no formal training in logic. it can be considered a test of informal logic, but it is not a test of formal logic by any means.)
  1. https://www.amazon.com/Cengage-Advantage-Books-Understanding-Introduction/dp/1285197364 in my view, the best book on argumentation that exists; worth reading either alongside PowerScore’s book, or directly after it.
  1. https://www.amazon.com/Rationality-What-Seems-Scarce-Matters/dp/B08X4X4SQ4 pinker’s “rationality” is an excellent next step after learning how to reason through the previous two texts, since it establishes what rationality actually is.
  1. https://www.amazon.com/Cambridge-Handbook-Reasoning-Handbooks-Psychology/dp/0521531012 this is a reference text, meaning it’s not meant to be read front-to-back. it’s one of the most comprehensive of its kind.
  1. https://www.amazon.com/Handbook-History-Logic-Valued-Nonmonotonic/dp/044460359X — this is both prohibitively and ludicrously expensive, so you will probably need to pirate it. however, this history of logic covers many useful concepts.
  1. https://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374533555 this is a standard text that established “irrationality” as a mainstream academic concept. despite being a psychologist, kahneman won the 2002 nobel prize in economics (shared with vernon smith) for this line of work.
  1. https://www.amazon.com/Predictably-Irrational-audiobook/dp/B0014EAHNQ this is another widely-read text that expands on the mainstream concept of irrationality.
  1. https://www.amazon.com/BIASES-HEURISTICS-Collection-Heuristics-Everything/dp/1078432317 it is exactly what it says: a list of about 100 cognitive biases. many of these biases are worth rereading and/or flashcarding. there is also https://en.wikipedia.org/wiki/List_of_cognitive_biases
  1. https://www.amazon.com/Informal-Logical-Fallacies-Brief-Guide/dp/0761854339 also exactly what it says, but with logical fallacies rather than biases. (a bias is an error in weight or proportion or emphasis; a fallacy is a mistake in reasoning itself.) there is also https://en.wikipedia.org/wiki/List_of_fallacies
  1. here is another fantastic handbook of rationality, which is a wonderfully integrated work spanning psychology, philosophy, law, and other fields with 806 pages of content. https://www.amazon.com/Handbook-Rationality-Markus-Knauff/dp/0262045079 (it is quite expensive – no one will blame you if you pirate it from libgen.)

You will learn more through these texts than through the LessWrong Sequences. as mentioned, many of these are expensive, and no one will blame you if you need to pirate/libgen them. many, maybe even most, of these texts you will need to reread, perhaps multiple times.

This will also not make you a god of rationality. This is the foundation, which is why some of these (to those who are familiar with them) seem basic: they are. That’s the point. And the Sequences are much worse.

Lesswrong is primarily made up of nerds who want a hangout group/subculture rather than a means of learning rationality. This disparity between claimed purpose and actual purpose produces most of the objections people have, many of my objections in my video, and is why I created this alternate reading list in the first place. A subculture or ingroup should not dictate what you are able to learn. For as much as people regard feminist philosophy as an echo chamber, nothing is stopping me from reading Judith Butler. Meanwhile, LessWrong expects a person interested in rationality to actively participate in their subculture to become more rational. This is, to say the least, backward.

[deleted]

[deleted]
there's no need to speculate, these are the words of an EA leader: https://imgur.com/a/Bsbkesq
I'm not exactly a normie, and this instantly soured all involvement I wanted to have with EA. I viscerally feel the suffering of homeless near me. I've done things that aren't quite legal to help a homeless man get a job. the biggest issue seems to be getting a reliable address/phone that they can regularly access, because that's (from what I understand) the bottleneck on most job applications. I can't imagine what the general public would think if they saw this. it's unbelievable.
To be fair, this is because they prioritise people in low-income countries. I understand your emotional involvement with the issue and think it's admirable, but the basis of their ideology is that the empathy you feel isn't a reliable rubric, and inherently prioritises people who are physically close to you rather than, say, people most in need of help. I think it's good to focus on, but also agree it's not exactly aligned with EA.
right, I understand *prioritizing it*, which I think is fine. my issues are:

* there is very little auditing of this. it is just blind donating. if we knew 100% that it was going to help someone in much worse circumstances that much more, then sure, but I've never heard of anyone actually e.g. visiting the country where they're donating and seeing how this works from the inside.
* if you donate to the right local homeless organization — I don't know what that is, let's just suppose there is one — you can see the effects locally, avoiding the need for an audit.

so, yes, if you had perfect transparency that would warrant prioritizing that over homelessness in your area. however, dismissing it? absolutely not, which I understand is not a universal POLICY per se but is certainly the attitude — a "don't want to get our hands dirty" sort of thing.
The reason why they donate to the global south isn't necessarily greater need, but greater spending power. Regardless of how much you donate to an impoverished person in the UK, you can do more for an impoverished person in Thailand with the same money. It's also to do with neglectedness: impoverished Brits already have a lot more money spent on them than impoverished people elsewhere do. To be clear, they don't raise money specifically for impoverished Thai people, but if you take the logic I've just expressed far enough then you wind up donating to the Against Malaria Foundation as well as de-worming programs.

I agree that the lack of oversight and auditing is problematic. It's one of the reasons I'm no longer involved in the community. I do think there are a lot of valid critiques of EA. It's just that their "neglect" of people in western countries doesn't strike me as one. That's not to say that people in western countries aren't suffering greatly, or that they have enough high-quality attention to their problems. It's just that this is even more true for people in the global south. And I'm conscious that yeah, as little empathy and respect as impoverished Brits get, people outside the West simply aren't on the public's radar at all and are even more dehumanised + experience a lot less empathy. I'm not going to fault an organisation - EA or not - for shifting that balance. Or more generally, for prioritising causes they believe are the highest priority.

The other thing I'd say is I think groups should be allowed to focus discussions on specific issues that represent their priorities and cause areas. I wouldn't go to a Sisters Uncut meeting and start talking about the issues of terminally ill children (without any gender analysis accompanying it), and then call them heartless or wrong for saying "hey, as important as this topic is, this isn't within the remit of what we discuss in our group".
Tbh, I'd go as far to say it's putting them in a bad position if I then take screenshots and say *"look!!! They* ***ban*** *discussion on terminally ill children!!!"* No one wants to have to shut that type of discussion down but if I'm going to a space that's specifically for discussing feminism, and using it to platform an important but non-feminist issue, then I kind of had it coming. And I'd say the same for EA and stuff outside their cause areas. They've been clear about what their priorities and pre-requisite beliefs are before you join the group. If you enter *their* space and then start platforming stuff that's outside their cause areas, then it's going to get shut down. Not out of heartlessness, but just because if a space doesn't have boundaries then it's going to lose focus, and make no progress on anything.
>I do think there are a lot of valid critiques of EA. It's just that their "neglect" of people in western countries doesn't strike me as one.

To say that donating money to people in underdeveloped countries is the most effective way to do good in the world is really just begging the question, and it's also revealing of the limited tool set EA people bring to the table. It's not obvious that spending money is the most effective way to do good in the world irrespective of where you happen to spend it. EA people commit the cardinal intellectual sin of having immense overconfidence in their understanding of the limitations of their knowledge.

And even apart from such practical criticisms, I think there's a very real emotional ickiness about the EA philosophy too. Someone who believes that the suffering of anonymous people in faraway lands is equally as important as the suffering of a homeless guy that they see on a daily basis is someone who treats other people as abstractions. Their moral compass is suspect, and probably broken.
>Someone who believes that the suffering of anonymous people in faraway lands is equally as important as the suffering of a homeless guy that they see on a daily basis is someone who treats other people as abstractions. Their moral compass is suspect, and probably broken.

First of all, those in global poverty are almost always suffering more, and you can help more of them with the same money. So the appropriate question is between the homeless guy and like 10 people in another place. Secondly, the only reason those in global poverty are "anonymous" to you is that you *haven't bothered to look them up*. They're too busy surviving to call you up and beg you personally.

I have my problems with EA's "abstractions", which I think is valid when it comes to future people and to shrimp and so on. But to call actual, living human beings in extreme poverty "abstractions" is morally callous.
Yeah I second this. Human beings should be treated equally, but as a species we organise via social bonds and relationships, which compromises our ability to be objective about it. I think there’s something to recognising that *actually*, sometimes your emotions lead you to dark places that go against your expressed values. And that yeah, thinking in numbers isn’t exactly humanising, but may lead to “fairer” outcomes when it’s, say, training a service animal for one western person vs curing tens/hundreds of people in the global south of blindness.

The future people talk interested me when I first saw it and I can’t say I disagree, in literal terms, *in theory*, if it’s 10 lives 1000 years in the future vs 10 lives now. But we don’t have that type of prediction power and as people here have pointed out, it creates a utility monster.

I’ve been vegan since 2014 and don’t value nonhuman lives inherently less than I value human lives (though ngl, shrimp talk is pushing it). But I think playing God is incredibly dangerous, and their talk on nonhuman lives approaches this point. We need a realistic grasp of what we can and cannot control, but a lot of EAs (including and beyond those involved in animal advocacy) seem to think we can be the rational dictators of earth who force nature to be fair, and it genuinely scares me.

The main issue I have with EA, generally, is that the community is rife with fundamentalists, who think that expected value theory and “Bayesian thinking” solve every problem on earth, essentially. They are evangelical too. And no matter how well meaning, this combination of traits is dangerous (in my books). That being said, this problem is hardly exclusive to EA. I spent an embarrassing amount of time yesterday arguing with tankies. On the other end of the political spectrum you get hardcore Adam Smith simps who refuse to see the injustice that’s rife in our society. Religious fundamentalists, generally, are dangerous.
Even science fundamentalists can be dangerous when you look at the way psychiatry has been weaponised against black people, political dissidents, trans people, and women, and people in those societies refused to see it. Sorry, went off on a tangent, but yeah. I don’t think the ideas of EA are 100% wrong, but they’re not 100% right or well suited to every problem, and a community who believes they are 100% right and tries to force their ideology onto every problem is dangerous, regardless of what that ideology is.

I’ve come to believe that keeping things agnostic, and actually having a degree of distance between your identity and your work, is a good way to prevent these issues. EA still has some valuable ideas that, in a secular, non-dogmatic context, seem to be valuable. Such as: you can help many more people on much higher-stakes things if you spend money in places that are under-resourced, where the same money has higher purchasing power.
You can't look them all up. Everyone has finite time and energy. Most of humanity will necessarily remain anonymous to you forever and there's nothing you can do to change that. It's not morally callous to be aware that most people are anonymous to you, but it is morally callous to treat all people as mere abstractions without even realizing that you're doing it, which is what EA does.

If we take the EA philosophy seriously and ask how best to act so as to benefit the most people over the longest time period, then it is not necessarily true that donating money to underdeveloped countries is the best strategy. It might instead be the case that increasing the capacities of developed countries so that they can help underdeveloped ones more easily is the most effective strategy, and that involves doing things like addressing poverty in developed countries. It's impossible to know if that's true, though. It takes only a small amount of humility to recognize that determining the optimal way to help humanity is not a solvable problem. Yet EAs cannot muster even that much humility, and have incorrectly convinced themselves that entire categories of altruistic behavior are beneath their consideration.

cc /u/EditRedditGeddit
I think it's morally callous to ignore people's suffering just because they are anonymous to you. I don't need to "look people up" and make them do a dance of gratitude towards me in order to want to help people. You call this attitude "abstraction"; I call it being a decent human being. I'm all for humility, but not as an excuse for inaction. We don't know the exact best way to combat climate change, but that doesn't mean we should put our hands in our pockets and not do anything about it. I'll fault EA for a lot of things, but I will never fault them for trying to do good, or for trying to prioritize the areas which are most in need.
I don't think that's fair. I believe, in a conscious, intentional sense, that people suffering where I can't see them are just as important as people suffering where I can see them. I find the alternative strangely solipsistic: it implies that I can make someone less important by not looking at them. I also have a strong emotional reaction to actual humans suffering in front of me. I think part of the appeal of EA is that it suggests a way of reconciling that conflict. I don't think it's the ideal way; I think "what's the most effective way to allocate our charity dollars to help poor people?" is the wrong question because private charity is the wrong tool here. But it's certainly less emotionally-icky than the traditional questions, "how can we make the local poor go somewhere where I don't have to look at them?" and "how can we blame the local poor for their own suffering so that I don't have to feel bad for them?"
"Importance" isn't an objective metric; there's always the implicit question of "important *to who?*" You can't change how important someone is to other people, but you necessarily make someone less important *to yourself* by not knowing them. You can choose to believe that an anonymous person in a distant part of the world is important to you, in the sense that you can choose to believe pretty much anything if you really want to, but that belief doesn't correspond to any reality. *The anonymous person that you imagine as being important* doesn't exist; they're just an abstraction that you've made up in your own mind. You could just as easily imagine how important all the people living on Mars are, despite the fact that there aren't any. EA imagines the world to be more comprehensible than it really is, and in doing so it indulges in delusional thinking.
> but the basis of their ideology is that the empathy you feel isn't a reliable rubric, and inherently prioritises people who are physically close to you rather than say, people most in need of help.

Let's say my best friend needed help moving, and asked me if I could spend some time over the weekend helping out. But I declined, saying it would be better for the world for me to work some extra overtime and use the additional income I earned to purchase mosquito nets.

You could similarly imagine helping a local homeless person may allow them to purchase adequate shelter to prevent them from freezing to death on a very cold night. Which, in turn, may prevent children in your neighborhood from being traumatized by seeing the body of a homeless person on the way to school. Helping someone "local" to you has a kind of reciprocity that - if you ever received it from purchasing mosquito nets - would take a very long time to circle back.

In the same way people try to keep wealth in a community, it makes sense to me that you'd want to keep some amount of altruism in a community rather than sending all of it to the other side of the globe. If all of the goodwill leaves your proximate area, then your proximate area starts to suck. There should be a balance between these priorities.
I'm largely a defender of EA focusing on extreme poverty over poverty in developed nations, and I would love some clarification on this. I don't think giving money to the homeless is bad, but I strongly feel the money I donate every month should go to people who are starving or dying rather than people who are hungry or cold. How can this be a bad thing in a world of limited resources? Aren't you throwing those in extreme poverty under the bus, in a very hyperbolic sense, by donating to the merely homeless (gross term, sorry)?
[deleted]
I think you're mixing up the forums and the facebook group. (The screenshot above was from the latter). The EA forum allows pretty much anything, even if the connection to EA is super flimsy. See the eugenics post from a few days ago.
[deleted]
I wasn't trying to dull the point, I was pointing out that you made a mixup. In my experience, a high-effort post on first world poverty would likely be tepidly upvoted on the forum, then promptly ignored.
[deleted]
People in *which* part of Africa? Certainly there are prosperous countries and regions all over the continent. There are also massive regions where people live in such extreme poverty that they can't afford to buy a $3 malaria net for their family's protection. Here is an [open letter](http://dear-humanity.org/letter-to-humanity/) from a person who grew up in one such region, literally begging the first world to directly fund grassroots third-world anti-poverty efforts. The author is still highly critical of EA, on the grounds that the money should go to non-western-run orgs.
[deleted]
Did you reply to the wrong comment here? I'm saying people should donate to third world poverty, not to fucking MIRI.
1. You don't know if the money helping "extreme poverty" is actually doing that.
2. You can see the visible effect on poverty in your local area.
3. Most importantly, #2 involves *ACTUALLY MEETING* people suffering from that poverty.

The "first world poverty" line is a way for rich introverts to avoid meeting anyone who makes them deeply uncomfortable while still feeling like they did a good thing.
Yeah but donating to people local to you inherently privileges people in western countries over people in the global south. If everyone could just donate locally then maybe I'd agree with you, but resources are not distributed equally. In my opinion, British people donating predominantly to British people is not that different (in fact, it effectively is the same as) white people donating predominantly to white people. POC and people from the global south get excluded from charity and activism if our focus is exclusively on those closest to home.
> Yeah but donating to people local to you inherently privileges people in western countries over people in the global south. That depends on where the pot comes from, surely? Are you donating to poor people in London over poor people in Mogadishu, or are you donating to poor people in London over rich people in Silicon Valley? Why ban donating to poor people in rich countries when you not only tolerate, but heavily encourage, donating to elite scifi snobs working on their hobbies? Homeless people aren't a priority because Africans, but rain money on smart high schoolers no strings attached, why?
[deleted]
Incredible that you inferred that from my comment. Just extraordinary.
[deleted]
I'm not. I'm just saying that when you live in a Western Country that has a history of colonialism and global dominance, that if you choose to spend your money "close to home" then you *are* choosing to spend your money based on someone's inclusion to a dominant group, while neglecting others based on their inclusion to an underclass who we exploit. That being said, that underclass who we exploit are *always* going to be disproportionately POC. "Hey, there are some black people in the UK" doesn't change the fact that the countries we exploit are often 100% black, and that we're perpetuating racial inequality on a global scale if we prioritise people from the majority-white country over people from the almost-exclusively-black one.
> pinker's rationality

yes I know how much this sub thinks poorly of pinker, I was wary of this myself. however I feel the rest of the recommendations offset this, since I think he's best seen as an *introduction to* rationality and not a text. (which fogelin/sinnott-armstrong is.)

> and sources (or at least notating the places you are operating off working memory) so we can backfill any sources.

the video has sources in the description:

1. sample WAIS report https://www.pearsonassessments.com/content/dam/school/global/clinical/us/assets/wais-iv/wais-iv-score-report.pdf
2. what is g https://www.youtube.com/watch?v=jSo5v5t4OQM
3. childhood IQ vs. adult IQ https://pubmed.ncbi.nlm.nih.gov/12887561/
4. wonky attempts to measure IQ above 160 https://archive.vn/kFCY1
5. computer-based verbal memory test https://humanbenchmark.com/tests/verbal-memory
6. typing speed / IQ https://eric.ed.gov/?id=ED022127
7. simple choice reaction time https://www.psytoolkit.org/lessons/experiment_simple_choice_rts.html
8. severity of 83 IQ https://www.youtube.com/watch?v=5-Ur71ZnNVk
9. googleability of WAIS https://nda.nih.gov/data_structure.html?short_name=wais_iv_part101
10. uses of WAIS in clinical care https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3462502/
11. drunk reaction time experiment https://imgur.com/a/IIZpTol
12. how g correlates with WAIS https://archive.vn/gyDcM
13. low murderer IQ https://archive.vn/SrenV
14. tom segura bit about the first 48 https://www.youtube.com/watch?v=B0l2l1PXqIE
15. rarity of perfect LSAT scores (30 out of 100,000) https://archive.vn/KWAzf
16. limits on human reading speed (1) https://archive.vn/IVU8x
17. limits on human reading speed (2) https://psycnet.apa.org/record/1998-11174-004
18. kinobody fitness callout by philion https://www.youtube.com/watch?v=WjytEso-c6k
19. summary of lesswrong drama (Jan-Mar. 2022) https://alfredmacdonald.medium.com/summary-of-drama-as-it-pertains-to-the-austin-lesswrong-rationalists-and-vibecamp-6ee40d230460
20. leverage / geoff anders pseudo-cult https://archive.vn/BKvtM
21. the questionability of michael vassar and related organizations https://archive.vn/8A8QO
22. sharp vs soft culture https://archive.vn/VOpya
23. something-in-the-waterism https://alfredmacdonald.medium.com/something-in-the-water-syndrome-a7c90c612074
24. on the fakeness of many bayesian priors https://alfredmacdonald.substack.com/p/your-priors-are-fake
25. criticism of the "postrationalist" subculture and the problems created by pseudonyms and hyper-privacy norms https://alfredmacdonald.substack.com/p/vibecamp-and-its-consequences
26. proliferation of "technoyogi" woo in this culture due to lack of BS-calling norms https://alfredmacdonald.substack.com/p/technoyogi-bullshit-and-cure-by-rubricism
27. questionability of the vitamin A charity I mentioned https://archive.vn/2AxlK
28. MIRI support from Open Philanthropy https://archive.vn/JW6WT
29. MIRI publication record https://archive.vn/9hIhT
30. MIRI staff https://archive.vn/hJeuT
31. MIRI budget, 50% of which is spent on research personnel https://archive.vn/z6bvz
32. benefits of sharp culture (or at least a mean robot boss) https://archive.vn/onIfM
33. daniel dennett on, among other things, the problems with treating all suffering as interchangeable https://archive.vn/5SLEy
34. on reading comprehension limits: https://catalog.shepherd.edu/mime/media/12/913/SU+Credit+Hour+Policy+Appendix+B.pdf -- while a 50th percentile student reads (with retention) at 250wpm and a 75th at 500wpm for "general expository reading (e.g. news)", this same group reads at a 50th percentile of 149wpm and a 75th percentile of 170wpm for "advanced scientific and/or technical material". assuming a gaussian distribution, the distance between the 50th and 75th percentiles is about 2/3 of an SD -- so with an SD of ~31.5, reading said material at 306.5wpm is 5 SD above the mean, or about 1 in 3.5 million.
the average audible narration rate is 155wpm, so this casts serious doubt on those who say they're 2xing or even 1.75xing advanced audiobooks/lectures.
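if anyone wants to check the percentile arithmetic above, here's a quick sketch in python's stdlib (it uses the same 2/3-SD rounding and the 306.5wpm figure from the comment, so treat the exact "1 in N" as approximate):

```python
from statistics import NormalDist

# figures quoted above for "advanced scientific and/or technical material"
# (wpm with retention): 50th percentile = 149, 75th percentile = 170
p50, p75 = 149.0, 170.0

# the exact z-score of the 75th percentile is ~0.674; the comment rounds it to 2/3
sd = (p75 - p50) / (2 / 3)          # ≈ 31.5 wpm

# reading at 306.5 wpm (roughly 2x the 155 wpm audiobook narration rate)
z = (306.5 - p50) / sd              # = 5.0 standard deviations above the mean

tail = 1 - NormalDist().cdf(z)      # fraction of readers at or above that speed
print(f"sd = {sd:.1f} wpm, z = {z:.1f}, about 1 in {1 / tail:,.0f}")
```

the printed figure comes out to roughly 1 in 3.5 million, matching the back-of-envelope number in source 34.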
[deleted]
Right! This is why I suggested "the history of logic" among other things. People need to, well, read. They used to, in the 2009-2011 era of LessWrong; they don't anymore, or when they do it's like a pre-approved list. Like, Nozick was an option in the local book club. (He didn't win.) I could have suggested W. E. B. Du Bois and it would have gone nowhere, nevermind bell hooks or whatever. I'm for reading all of these people. Yet for at least three or four months out of the nine I was there, I had to decline to read the book because I had already read it — it was predictable stuff like Superforecasting. I just gave up. ⠀⠀*(Barack Obama voice)* ⠀⠀To be clear, I definitely don't think my recommendations are perfect. They're better than the sequences though, in the sense that whoever reads them will know a *lot* more about rationality.
[deleted]
Yeah I think that general ideas of logic and rationality (whatever that means, at least logic has a clearer definition) are less useful than reading different fields and actually developing a deeper base of knowledge in one of them. For instance, history (not the history of science or philosophy, just like, normal history) gives you an incredibly powerful set of tools for looking at the actions of human beings, the nature of human cultures, etc. The specificity of history, as opposed to the universality of philosophy and most sciences, really does help you learn to think in a different way.
> People need to, well, read. They used to, in the 2009-2011 era of LessWrong I'm pretty sure they actually didn't do this in practice tho
sucks being the kids in the class who actually do the readings
[deleted]
I definitely don't want to promote a vicious and mean-spirited work. if it means anything, I have an odd brain condition that makes my eyes twitch; without the aid of drugs/exercise my eyes can only stare at text comfortably for about 6-10 hours a day, depending on how lucky I am that day. so, I prefer to listen to audiobooks when possible and listened to the audiobook of "rationality" and other works by pinker rather than reading them in book/ebook form. in all honesty, the narrator did not strike me as mean-spirited when I listened to "enlightenment now" and "rationality". this is in contrast to e.g. dawkins, who does sound that way. I might be missing something that's present in his textual version.
[Here](https://www.currentaffairs.org/2019/05/the-worlds-most-annoying-man) is a good breakdown of why those adjectives are appropriate to apply to Steven Pinker.
yeah, I kind of feel where you're coming from now, e.g.: > Pinker is supposedly “such a nice guy,” a person who is restrained and moderate and reasonable, who laments that politics has gotten so vicious and tribal. And yet in his books, you find him comparing environmentalists to Nazis and campus anti-bigotry initiatives to Stalin’s purges. Those he disagrees with are “quasi-religious,” “authoritarian,” they push “emotionally charged but morally irrelevant red herrings.” Al Gore and the Unabomber belong together. When anthropologist Jason Hickel critiqued Pinker’s theses in the Guardian, Pinker snapped that Hickel was a “Marxist idealogue” while leaving many of Hickel’s arguments unaddressed. Is this what the Chronicle called Pinker’s “relentless friendly persuasion, a kind of indefatigable reasonableness”? this is a similar thing that bothers me about lex fridman, although for slightly different reasons but very much in the same area code. sam harris goes the other end, and tone polices people like nassim taleb. and, now that I think about it, I have not seen taleb on any of the major podcasts like this, so it's possible there's some club-like thing going on, which I *have* observed on a smaller scale. (for a reverse of this, aella was invited to hereticon not because she is "heretical" — she moved to get me removed from our local discord, so she is no stranger to *hereticizing* — rather, she was/is on the good side of people who are connected to founder's fund.)
THE WORLD’S MOST ANNOYING MAN by NATHAN J. ROBINSON The irony of authorship.
Thanks for including it. And thanks for the list. I think Pinker’s rationality is an excellent and appropriate book to recommend.
*wipes sweat off of brow*
Imagine if you'd recommended Galef's The Scout Mindset
oh *fuck that*

[deleted]

[deleted]
👀👀👀

Grain of salt on ariely - there’s an ongoing investigation of some fabricated data in one of his studies, and his work in predictably irrational predates the psychology replication crisis

not grain of salt. I annoyingly saw a local discord mod google this controversy and use it as a reason to avoid reading anything he wrote. the way research is conducted at ariely's level involves a lot of subordinates, so if someone fucks something up and is exposed this means that this weakness exists for lots of large-scale coauthored research, and it's a question of what hasn't been whistleblown. you can't act like he's uniquely suspect when this same vulnerability exists for all researchers in parallel circumstances.
A grain, not a handful. I read up on it a bit. It seemed like the last person to touch and manipulate the data was Dan, who iirc said himself "this looks pretty damning". Researchers are subject to the same incentive pressures as other industries, especially when the study in question was for/with an insurance company which was probably funding it. "no significant result" doesn't usually get more grant money and partnerships. I don't consider it completely conclusive, but I am appending a "maybe this doesn't actually hold up" to knowledge I got from predictably irrational.

Ariely was one of the figures whose thinking and research resonated with me at the same time I was into LW's rationality stuff. We shouldn't tear down public figures and disregard their work just because they've put time and effort into becoming public figures and have come under scrutiny, but we should factor that in when deciding how much to trust them. We should especially look for large sample replication studies of their research.
yeah, all of that is fair. nothing to disagree with.
Sheiiiiit an internet disagreement resolved without a big dumb argument, this is going to give me a little extra happiness and faith in humanity all week, thanks stranger 🙂

[deleted]

A few things:

- my list doesn't focus on formal logic. I think formal logic is less useful than informal logic, because even if you can perfectly symbolize/formalize something you can get into a debate over whether it's symbolized properly.
- the LSAT has three sections: (1) Critical Reading, which is exactly what you think it'll be like; (2) Logic Games, which is probably what you're thinking of when you've heard about the LSAT facing criticism; finally, (3) Logical Reasoning, which is a plain-language test of logic. The logic games section indeed has very little to do with logical reasoning ability, and was ruled discriminatory *and* is by far the most trainable part of the test insofar as memorizing certain techniques for each question type will consistently get you a good score on that section. The LSAT is removing the logic games, and basically no one — including LSAT prep tutors — has lamented this.

> your repeated claim that it is the most rigorous logic test available anywhere in English seems incredibly overblown

There's no way I could know what the most rigorous logic test in English is. I think it's the most rigorous *standardized logic test using no symbols and purely verbal communication.* (I should probably also add "that a person can reasonably access", because maybe there is a more rigorous one that a few hyperspecialized psychometricists have made but which only a few people in the world can proctor.) There are almost certainly much more rigorous logic tests which are verbal and nonstandardized — some professor has to have created one — and there are absolutely more rigorous logic tests that are nonverbal, but I know of no more rigorous standardized verbal logic test that requires no former background in logic, which is why I suggested it.
[deleted]
If I said "reasoning" then I'd agree with myself when accounting for how accessible it is and how vetted the questions are. Making a better reasoning test is certainly possible, but I don't know if it would be as accessible or if the questions would have the same test validity. e.g. for a lot of people, the standard adult IQ test (WAIS) is going to cost *a lot* of money. There are also obscure tests that attempt to measure the higher ends of these abilities but they are often not vetted very well, or are given out to a small number of people. The LSAT is normed to about 100,000 people annually so the questions are quite reliable. If I said "logic" and nothing else then I was wrong and misspoke, sorry.
[deleted]
I can't re-record the video, but I will edit the description to be clearer if that helps.

> You also seem highly focused on the (incorrect) inference that I'm talking about the higher end of ability when I talk about what the LSAT misses

my apologies. in recommending it I had in mind the sort of person who *wants* to read this stuff, who is probably much more literate than e.g. the average american. specifically, I don't think it would be inappropriate for the people I've met through lesswrong. I agree that if this were a universal curriculum it would be a bad idea.
Thanks. I felt the answers on that test were a bit wonky.

Have you heard about our Lord and Savior, Ludwig Wittgenstein?

[one of my favorite books](https://www.amazon.com/Wittgensteins-Poker-Ten-Minute-Argument-Philosophers/dp/0060936649)

lesswrongs think phil is lame, they could never
Yeah, but they love an eccentric (probably on the spectrum) genius/disruptor from wealth with an engineering background. *The Duty of Genius* is probably digestible.
The Duty of Genius looks really good. I've wishlisted it fwiw.

also, Wittgenstein served in actual infantry and was very loud IRL. having met LessWrong people IRL, they would kick Wittgenstein out after a meetup or two. he also would not be into the culty aspects of LessWrong. this accords with what I read in Wittgenstein's Poker:

> W himself often felt that he had a bad influence on his students. People imitated his gestures, adopted his expressions, even wrote philosophy in a way that made use of his techniques – all, it seems, without understanding the point of his work.

anyway [this new biography of Malcolm X](https://en.wikipedia.org/wiki/The_Dead_Are_Arising) also looks really cool. setting aside that Wittgenstein's cabin life was *substantially* more voluntary, I am a sucker for biographies where the subject isolates for a while and comes back a new person.
[deleted]
that is really interesting — what I've read about him so far *has* been quite flattering, so this will be a good read
Not to be too pedantic, yes, Wittgenstein attracted a cult of personality, but cult behaviour? I don’t know what that would be in reference to.
[deleted]
Yeah, that is a good point. I still think it is fairly far from cult or cultish stuff. He had a cult-like following and gave advice about choosing careers that did good in Wittgenstein's estimation. I'd label that as pretty standard narcissism.
[deleted]
I'd like them to be less impressed with counterintuitive speculation and jargon, maybe pass over in silence more often, but, yeah, they'll probably just adopt and exaggerate Witt's most annoying habits.

It’s probably worth abandoning the “rationality” terminology, which is marketing used to confuse the audience into thinking this stuff is some kind of secret material for the elect rather than just what every first year philosophy student has learnt since the invention of the university, and just calling it “logic and critical thinking.”

So I mentioned the virtues of reading history (and one could also say literature) in another comment, but when it comes to thinking about reason and logic I think it is important to read critiques of them and not just more instructive works. So like, Wittgenstein's Philosophical Investigations. Might not actually be the last word on Philosophy but a lot of his concepts are a really useful reminder that a lot of ways people talk and think aren't 'rational' and that is fine, actually. Rorty apparently takes this ball and runs with it but I haven't read him.

> Rorty apparently takes this ball and runs with it but I haven't read him. man, there were a [ton of interesting books by Rorty \(and others\) at the Trinity University library](https://imgur.com/a/2ejR0rI) that I cannot check out until Jun 23 as I am criminally barred for trying to take a construction bollard I thought was discarded so I could use it for fire hydrant lifting practice. ([yes this is a real thing](https://www.instagram.com/p/ColJda8oCNh/).)

On your point on vitamin A supplementation, I was very quickly able to find that GiveWell's justification for VAS is based on studies looking at the effect of VAS on all-cause mortality. I don't know how sound their conclusions are as I haven't looked deeply at them at all, but I don't think you gave a fair characterisation of the evidence supporting the pro-VAS position.

Also, I think your dismissal of veganism is pretty spurious tbh. I find it weird that you appeal to number of neurons as a sort of proxy for ability to suffer (or awareness of suffering?) like there's a linear relationship between the two.

I have read these including the report, but my concerns are stuff like this, which was cited by one of the EA articles: [https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD008524.pub3/full](https://www.cochranelibrary.com/cdsr/doi/10.1002/14651858.CD008524.pub3/full)

specifically I am concerned with the length of the intervention. the longest intervention was about two years. I don't doubt that this had significant impact on all-cause mortality during a 1-2 year period, but if the cause of the deficiency itself is serious malnutrition, should the concern not be a much longer monitoring period, given that another nutrient deficiency could cause other serious problems? to put this another way, a friend of mine died of cystic fibrosis when he was 28; his life expectancy was much longer than this with the right treatment. or to use another analogy, if we jail a guy who is determined on murdering me for two years we've reduced my all cause mortality, but once he gets out that's another matter. there's a difference between curing something and stopping it from killing people within a two-year followup period.

the EA article does not seem to address this concern, or if it does I missed it somewhere. my primary question is just "is this the result of a general lack of food that will cause nutrition problems inevitably, and is this vitamin A treatment a very temporary band-aid?" the EA report gives me a lot of data but does not address this question, or did not at least when I checked.

about this:

> I think your dismissal of veganism is pretty spurious tbh. I find it weird that you appeal to number of neurons as a sort of proxy for ability to suffer (or awareness of suffering?) like there's a linear relationship between the two.

I agree with you; I specifically think there *isn't* a linear relationship and that the people who use neuronal count are being facile about consciousness. if I gave the impression that I think this I erred somehow.
Thanks for responding. I think I have a better understanding of your position now. I'm still not convinced it deserves to be an example of one of your biggest issues with the rationalist community, though.

While you're right to say that there could be - and probably is - more than just a deficiency of vitamin A in the diets of these populations, I think you need to provide more evidence for a deficiency of similar magnitude waiting in the wings before discounting the efficacy of VAS alone. I'm sympathetic to the argument of "Just feed them", but that's obviously a much harder (and more expensive) solution when it could simply be that sources of vitamin A are scarce in these regions, especially when academics who study this professionally have identified vitamin A in particular as deficient and not a "lack of food" in general. It really could be similar to sailors and scurvy where it really is primarily a deficiency in a single vitamin in the diet of a population.

> I agree with you; I specifically think there isn't a linear relationship and that the people who use neuronal count are being facile about consciousness. if I gave the impression that I think this I erred somehow.

Okay, on rewatching the section I can see that that was you giving an example of a bad argument and not you giving the bad argument, though it really isn't that clear lol. I did find it a bit bizarre that the quality of your arguments suddenly fell off a cliff but it wouldn't be the first time I've seen something like that happen. I do find it strange though that you criticise them for anti-vegan arguments when from what I've seen rationalists on the whole are *so* much more vegan (and for the right reasons) than society as a whole - I've heard EAs derided for any focus they pay to non-human animals more than once.
I guess it makes sense to include if you think of LWers as people seriously examining their own biases and striving to be both correct and ethical, but I guess I never took the label of "rationalist" seriously as much more than nerds (non-derogatory) self-identifying as part of a particular in-group that has an aesthetic of rationality.
> I think you need to provide more evidence for a deficiency of similar magnitude waiting in the wings before discounting the efficacy of VAS alone. I'm sympathetic to the argument of "Just feed them", but that's obviously a much harder (and more expensive) solution when it could simply be that sources of vitamin A are scarce in these regions, especially when academics who study this professionally have identified vitamin A in particular as deficient and not a "lack of food" in general. It really could be similar to sailors and scurvy where it really is primarily a deficiency in a single vitamin in the diet of a population.

I actually don't disagree with this; I agree that a scurvy scenario is possible. rather, it's my concern of how EA treats the discussion we're having right now, and what EA *is and functions as*, i.e. the way to most effectively distribute large amounts of wealth and do the most good. if it turns out it's a scurvy-style scenario, then we've spent money very well. if it turns out it's broadly a need of food, then we may have misused a lot of funds. (the SBF scandal, after all, was under the banner that this will eventually go to a good cause — so if we're going to be so consequentialist we'd *better be really sure* about that.)

EA, as many see it, is the organization that is supposed to look through all of this and say "this is green-lighted to throw money at blindly"; if someone is doing the kind of thinking you're doing here when making their donation, then I don't think that's objectionable, in fact I think that's what people should be doing. it's the "we've vetted this, trust us, it's ok to throw money at this" part that I object to so much, when I'm really unsure about EA being justified in doing that.

I feel like it’s strange to assume that people in this particular subreddit can’t answer 10 basic LSAT reasoning questions, but I digress

There is no trick to being less wrong about the world, and the only method to being less wrong about a particular subject is to study it seriously, or, short of that, study it less seriously through those who have studied it seriously. For example, if you’re interested in the psychology of making decisions, reading Kahneman may teach you a number of things about the subject (although it won’t make you really know the subject), while reading Steven Pinker, who was a scholar in psycholinguistics (although he is mostly famous as a pop-sci writer with strong opinions) on anything other than psycholinguistics is likely to make you know less about that subject (as would reading any work on subjects the author hasn’t actually studied, especially of the kind that’s gaining popularity in circles that favour opinion over scholarship).

No algorithm can replace education, and no amount of thinking can make you know more. Movements for knowing more by learning less are the intellectual equivalents of get-rich-quick schemes. Those schemes won’t make you rich, and “rationality” won’t educate you.

But there are people who read a ton who *are* really wrong. There is a meta level, in seeking to understand things like confirmation bias, and how it might color your education.
The best way to learn that meta level is to actually study something. By "actually study" I don't mean "read a lot" (although, depending on the subject, there might be a lot of reading involved, but that reading needs to be guided by an expert; as you progress, you become an expert yourself and know what to read). When you study something you also learn how people were wrong on that particular subject, and basic things like confirmation bias are often taught in most disciplines. BTW, all people are often wrong, but those who think there's a trick to not being wrong are wrong pretty much all the time; as you would put it, they're wrong on the meta level.
That's a good distinction to draw. Some areas might be a lot of reading, but to be less wrong about... Wood carving or archery, you need to actually work on those.
[deleted]
I have no clue what you're trying to say here, but it seems like you're trying pretty hard to say it, what with the mathematical "order these groups" approach -- so give it another shot?
[deleted]
Yeah, but I'm not sure why you think it's wrong to do so? Recognizing that you're only reading things that you already agree with, or retaining information that supports your existing views are skills, and arguably, kind of meta ones
[deleted]
And by "artificial", you mean "label a useful concept" (you even provided "critical thinking" as said label) I think you're saying that it's impossible to learn effectively without being good at identifying your own biases, so there's no point in bringing in such "meta" concepts. That's, frankly, nonsense; you're salty about LW et al making the discussion into a subculture and circlejerk, and thus reacting negatively to anything that reminds you of them. You can learn and apply critical thinking, and still end up misguided. One easy example would be to learn about something very narrow, and thus get a great understanding of something without the context to apply that understanding. Eg you can become an expert in pre-quantum physics, but if you refuse to at least acknowledge quantum, you're probably a crank. A crank who might be incredibly good at understanding most macro scale physics problems, though.
[deleted]
Propositionally knowing anything is kind of shit, though. Especially if you tell people about a new concept, and tell them to apply it, they will 100% just fuck it up. I don't think that means there's not a meta (meta meta meta meta) skill set that's applicable across disciplines.
"The first principle is that you must not fool yourself, and you are the easiest person to fool." (Richard Feynman) (I am agreeing with you.)
Has "meta" been, for lack of a better word, poisoned as a term for you / around here? Or does it have some very precise meaning in academic logic or philosophy that you don't like being co-opted?
[deleted]
My social circle (what's left of it anyways) has been using meta since the 90s or before. I seriously dgaf that siskind abuses terminology I find useful
one of the reasons I devoted such a huge portion of the podcast/audio to lesswrong rationalists who emphasize speed reading/listening is because they just don't want to read. which is to say, I agree with you. I've had the hardest time convincing this demographic that reading a lot and reading widely is important, so I put extra time into it in hopes that I can browbeat the point. inevitably, when I would suggest a book: "what's useful about this?" as if it's a power tool from home depot.
Saying that Rationalists don't like to study is like saying that vegans don't like eating meat. "Sophistry over expertise" is what Rationalism *is* (well, that and a power fantasy that intelligence confers, or should confer, a lot of power).
The only big name I know who actually puts in the work is gwern. Scott's psychiatry articles are great because *he is a psychiatrist*, but every time he does a "more than you wanted to know" on something out of his wheelhouse it reads like a research paper I'd turn in last minute. There were a few in the Austin LW group who read voraciously. It was about 5 people (out of ~115) and they weren't big names; I really enjoyed talking with them and resent that community leaders forced a choice between me or the subculture - which is what tends to happen if you're rationalist-canceled.
Putting in the work is the minimum bar for being well-informed, but it falls short of the bar for publishing worthwhile texts for general education. A person who puts in the work and studies a subject with great interest is called an amateur. It can be very interesting to talk with one on social media or in real life, but if you want to be less wrong about the world, go to the experts. Amateurs can provide a good introduction to a topic that piques your interest, and sometimes even produce good texts when expert material is nonexistent or not approachable. But the few Rationalist texts I've read are clearly written by people who are far from attaining the rank of amateur (and the same goes for Pinker's recent works).
no major disagreements here. my one quibble is I don't think I'd call them amateurs — "journeyman" though archaic seems to peg that area well — but I've echoed the thrust of what you're saying on social media a lot. (e.g. [this substack writeup](https://alfredmacdonald.substack.com/p/alt-academia-can-only-go-so-far-and); I blocked someone on FB over saying "smart generalists" could get a PhD in a year and [this](https://i.imgur.com/g7J6z27.png) is downright "what the fuck" inducing.)
I *didn't* call them amateurs :) Amateurs are people who invest considerable time in learning a subject but aren't professionals and often aren't experts. It takes effort to become amateurs, and they're far from it, and since they object to study on principle, they have little hope of ever becoming amateurs.
That's because Rationalism is a cult!

Man, I’ve never seen a community of haters this smart and dedicated. Not disagreeing with y’all. I’ve no experience with EA or dog in this fight, just found all this through recent drama. Am amazed at the level and volume of discourse. If all this attention and brainpower were somehow directed at legitimately recognized accountability, EA accountability could be insanely good?

Not haters, exactly. It's more similar to people who point out just how full of shit Andrew Sullivan or Jonathan Chait are. The fact that they're being covered in the mainstream press as reputable makes them a target, when in fact it's all a pile of motivated reasoning.
Right...but the mainstream press feeds on drama and negativity. Not a rational barometer to direct attentional resources. Just messing with you, go after em. I think creating new religions should be battle tested, and creating one out of quantifying morality is dumb but might gain traction.