r/SneerClub archives
A couple of years old and mostly NSFW, but has some pretty good sneers on evolutionary psychology (https://medium.com/@tweetingmouse/the-truth-has-got-its-boots-on-what-the-evidence-says-about-mr-damores-google-memo-bc93c8b2fdb9)

eurgh, I remember when the Damore memo came out, and a lot of people across the internet just assumed the poorly cited science was correct because it was stated authoritatively by a white dude.

It takes no time at all to spew pseudoscience and have it believed, but it takes a day or two to thoroughly debunk it, and in that time the false information flies across the world.

fantastic read!

this bit jumped out as perhaps being of particular interest to our community:

> Suffice it to say that, as I said earlier, the people most likely to rate themselves as objective, benevolent observers of fellow humanity are also among the most likely to hold strong implicit biases, and some of the most likely to actually discriminate against others.

and indeed the adjacent paragraphs about Dr. Fine’s research

(worth noting I edited out two citations from that single sentence for formatting purposes)

Why emperor penguins are not cited by rationalists as proof of anything is confusing to me.

Lobsters.

It also has some pretty good science from someone who studies sex behaviour in animals, if you’re into that sort of thing.

Biology can’t even define species; how could it define sex, much less gender?

Sex, gender and species can be useful heuristics as long as you don't take them as God-given absolute truths with God-given absolute boundaries.
Thanks, you summed up the speciation book I just finished in part of a sentence.
Which book? I find this stuff interesting because - well, mostly because it's fascinating - but also because the deeper you get into the realities of biology the more that clear logical categories which make Rationalism easy break down. Shit gets messy when you look at nature closely.
Tree Thinking by David Baum and Stacy Smith. From the book:

> tree thinking runs counter to standard perceptions of evolution in popular culture. We do not know why it should be so, but we have learned from working with thousands of students that, without contrary training, people tend to have a one-dimensional and progressive view of evolution. We tend to tell evolution as a story with a beginning, a middle, and an end. Against that backdrop, phylogenetic trees are challenging; they are not linear but branching and fractal, with one beginning and many equally valid ends. Tree thinking is, in short, counterintuitive.

[Here's a review](https://academic.oup.com/sysbio/article/62/4/634/1615731)
Thanks! I see in the review that they mention hybridization and horizontal gene transfer as the next step in understanding, which is another way that nature makes things even messier.
It's a mess! nature is such a shit show. I did my PhD on a "hybridizing cryptic species complex", i.e. a set of "species" that are indistinguishable morphologically and which (sometimes!) hybridize. The thrust of the research was whether the lack of mating barriers might actually be selected for, allowing the species to access hybridization partners when appropriate, while maintaining "species purity" under other conditions. We didn't really solve that, like I said it's a mess. A common question was whether, or on what basis, we were justified in calling these groups "species". We were just like [eehhhh, you know, the thing...](https://academic.oup.com/view-large/112728239)
Ah, so you're waaaay deeper into this than I've ever been! So they're all basically identical (except in some non-obvious way?), but usually they don't mate, but occasionally they do, and usually there's selection against hybridization because it leads to lower fitness, but maybe sometimes there's selection for hybridization to keep the genes flowing around, but nobody's sure? Is that approximately close?
Yeah exactly, "who knows" is the state of the art. Some populations split for whatever reason and immediately evolve powerful barriers against hybridization. Others, like my study group, stay half-assed distinct (genetically) for many generations, but then randomly hybridize like mad and the formerly distinct populations merge into one big breeding group. But, later, and very improbably, the original groups re-emerge from that mess, with new genes, but species intact! Why? How? The answer is that actually the question is wrong: this is just what life is like, and we should not be surprised when populations fail to conform to our rigid categories (or any categories).
This might be a dumb question, but: How do you tell that the groups which re-emerge are the original ones, with species intact?
Genetic similarity. They again cluster with the original species, although maybe it's wrong to say that they "are" the original group. The original groups are gone with time! Natural populations only exist in the present. That book above is great for wrapping your head around this concept that grouping populations that don't coexist spatially / temporally is a human act.
Ah, okay, makes sense.

I had to laugh at how this piece fucks up every time it brings up inferential statistics:

- the definition of a p value
- no correlation between effect size and p value (ok, this is just bad writing, not in itself revealing of a conceptual misunderstanding)
- the interpretation of the reported results of the meta-analysis

All wrong. Like clockwork. And it’s all so confidently stated, in this happily didactic tone.

Granted, inferential statistics are weird. Many smart people mess up here.

What's wrong with the p-value definition? Isn't the p-value the chance that you get a result at least as extreme as the one you got even though the null hypothesis is true? Is it that she didn't spell out "at least as" in her example?
I’m not even going to bother touching the p-value definition issue except to say you’re more or less in the right, although it’s important to remember that the “null hypothesis” is a construct in itself.

Anyway, this from their post history on /r/slatestarcodex is fucking hilarious:

> Why are you surprised? Half the internet is people being mad at the outgroup. Hell, **even on this very subreddit** sometimes people are mad at the outgroup. Imagine thinking it’d be unusual for /r/slatestarcodex to have partisanship
Your definition is correct. A bit more technically, p is the long-run frequency of test statistics at least as extreme as ours if the null hypothesis is true, assuming all assumptions hold. But the article states:

> The first number, the p-value, is the the chance that we just happened to get a whole bunch of shorter women and a whole bunch of taller men, and this data set is misleading us from the Real Truth that women and men are on average of equal height? Those odds are so small as to be infinitesimal — in this case, less than one in ten thousand, or roughly three times less likely than a lightning strike.

I parse this as:

> the p-value [...] is the the chance that we just happened to get a whole bunch of shorter women and a whole bunch of taller men

"P is the probability of our result being due to chance", or something like that.

> the p-value [...] is the the chance that [...] this data set is misleading us from the Real Truth that women and men are on average of equal height [...] Those odds are so small as to be infinitesimal

"P is the probability of us making an error when rejecting the null hypothesis", or something like that. The "in this case" IMO clarifies that the p value here is being interpreted as the odds of making a Type I error. But maybe you have a different reading of the quoted paragraph.

There is a handy list of p-value misconceptions in [this article](http://www.ohri.ca/newsroom/seminars/SeminarUploads/1829%5CSuggested%20Reading%20-%20Nov%203,%202014.pdf). But really, I don't want to be too harsh on the author for these mistakes - p values are weird, most scientists get them wrong. I think the misreading of the results section of the meta-analysis in part 1 is a bit more serious though.
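If a concrete picture helps: here's a rough sketch of that long-run-frequency reading, with made-up numbers and a plain two-sample t-test standing in for whatever test the article actually ran. Simulate data under H0 over and over, count how often the test statistic comes out at least as extreme as the observed one, and that fraction should land close to the p value scipy reports.

```python
# Sketch only: toy two-sample t-test with invented numbers, illustrating
# "p = long-run frequency of statistics at least as extreme, if H0 is true".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# "Observed" data: two groups drawn from the same distribution, so H0 is true here.
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.0, scale=1.0, size=50)
t_obs, p_obs = stats.ttest_ind(a, b)

# Redraw data under H0 many times and count how often the test statistic
# is at least as extreme as the one observed above.
reps = 20_000
hits = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, 50)
    y = rng.normal(0.0, 1.0, 50)
    t, _ = stats.ttest_ind(x, y)
    if abs(t) >= abs(t_obs):
        hits += 1

print(p_obs, hits / reps)  # the two numbers should roughly agree
```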
> I parse this as:
>
> > the p-value [...] is the the chance that we just happened to get a whole bunch of shorter women and a whole bunch of taller men
>
> "P is the probability of our result being due to chance", or something like that.

It's uncharitable of you to not finish the sentence that you're quoting. :-) I read it as:

> the chance that we just happened to get

the chance that we got...

> a whole bunch of shorter women and a whole bunch of taller men

...a result this extreme...

> and this data set is misleading us from the Real Truth that women and men are on average of equal height

...given that the null hypothesis is true.

If anything, it's Misconception #6 in the article you quoted.
I don’t know how exactly you interpret her; you’d have to spell it out - but I suspect it’s still false. “The chance that we got the particular result we got, even though the null is true” would be an incorrect interpretation. Or what do you mean?
I spelled it out as clearly as I could in the comment you're replying to - I read what she said as, "p-value is the chance that we got a result this extreme given that the null hypothesis is true," which doesn't quite get it right because it should be, as I understand it, "the chance that we got a result at least this extreme given that the null hypothesis is true" - but I think that's a bit of a sideshow beside what she talked about next, which is that effect sizes are generally a more important thing to focus on.

Her main point in that section, and I think it was a good one, was about how the effect sizes are small when you look at personality differences between men and women, and sometimes not even in the "correct" direction with regard to the case that Damore was trying to build. And those differences are small in spite of the ways (which she talked about in other sections) that adults respond differently even to male and female infants. From the day we're born people are trying to shape us into "boys will be boys" and "girls will be girls", and even with that the effect sizes of the personality differences end up being pretty small.

My own evidence tends to be more anecdotal. I think about the old ladies I know who are really into sudoku, and I figure that they could've ended up as software developers if our stereotypes were different.
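Back on the effect-size point for a second: here's a rough sketch with toy numbers of my own (not the article's data, and Cohen's d is just my choice of effect-size measure) showing how a tiny group difference can still produce a minuscule p-value once the sample is huge, which is exactly why the effect size is the number worth staring at.

```python
# Sketch only: invented data showing "significant" p-value with a tiny effect size.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000                                     # deliberately huge sample
men = rng.normal(loc=0.05, scale=1.0, size=n)   # tiny made-up shift between groups
women = rng.normal(loc=0.0, scale=1.0, size=n)

t, p = stats.ttest_ind(men, women)

# Cohen's d: difference in means divided by the pooled standard deviation.
pooled_sd = np.sqrt((men.var(ddof=1) + women.var(ddof=1)) / 2)
d = (men.mean() - women.mean()) / pooled_sd

print(f"p = {p:.1e}, Cohen's d = {d:.3f}")
# p comes out minuscule, d comes out around 0.05: "significant" but practically tiny.
```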
I really don't get how you can take that interpretation from what she is saying. How are her words connected to yours? I don't see it. This:

> The first number, the p-value, is the the chance that we just happened to get a whole bunch of shorter women and a whole bunch of taller men, and **this data set is misleading us from the Real Truth that women and men are on average of equal height**? **Those odds** are so small as to be infinitesimal — in this case, less than one in ten thousand, or roughly three times less likely than a lightning strike.

fairly clearly talks about p values giving the odds of rejecting a true null, I would say. That's also how [this poster](https://www.reddit.com/r/SneerClub/comments/m8syi9/a_couple_of_years_old_and_mostly_nsfw_but_has/grlqh49?utm_source=share&utm_medium=web2x&context=3) was interpreting it.

Sure, the first part *can* be interpreted like you said (although that's not the most plausible reading), but then comes a lot of ... other stuff. Remember, the p value is *conditional on H0 being true*. It is *not* the odds of being misled when rejecting a null hypothesis based on the data. It's a counterfactual description of the data, not the error bounds when making a decision to reject. But again, it's ok, inferential stats are complicated, people get them wrong.

> effect sizes

Effect sizes are descriptive statistics, not inferential - that's why I spelled out in my original comment that it's the inferential stats she's getting wrong. I don't want to get into what I think of her interpretation of descriptive stats, and I damn well do not want to start debating the merits of Damore's actual points here, one way or another, but she is indeed correct to *consider* descriptive stats at least, instead of just looking at p values.
I understand how I'm getting it from what she wrote, and I explained it as clearly as I could given the effort I feel like putting into it, so it seems like we're not going to get much further on this one. :-)
Fair I guess.
And what's wrong with that? If it's just because it's not saying "at least as extreme as the result we obtained" or "assuming all assumptions are true", you're being extremely pedantic. It's a good summary. The most grievous mistake people make when they interpret p-values is treating them as the probability that the null hypothesis is true/false, which the article did not make.
The author is talking about the odds of being wrong when rejecting a hypothesis, and p values are not odds of hypotheses.
Ok, how about you also apply some of that pedantry to your buddies over at r/TheMotte when they talk about heritability of IQ.
This isn’t pedantry. It’s the primary difference between the two main schools of probability and statistics, the frequentist school (p(D|H)) and the Bayesian school (p(H|D)). There are countless scientific articles explaining why it’s so bad to get this distinction wrong. I linked to one earlier. But, it’s ok. She wrote this piece, she made a mistake about stats that actually *most* scientists make, I sneered at it. It’s ok. I’ve also made statistical mistakes. Statistics is hard.
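To make the p(D|H) vs p(H|D) point concrete, here's a toy Bayes'-rule calculation with numbers I invented (and treating the p value loosely as "probability of data this extreme under H0"): even if such data only turns up 1% of the time under the null, the probability that the null is true given the data can be nowhere near 1%, because it also depends on the prior and on how likely that data is under the alternative.

```python
# Toy Bayes'-rule numbers (all invented) showing p(D | H0) and p(H0 | D) are
# different quantities: a small p-value is not the probability the null is true.
prior_h0 = 0.5           # assumed prior probability that the null is true
p_data_given_h0 = 0.01   # loosely "the p value": data this extreme under H0
p_data_given_h1 = 0.10   # assumed chance of data this extreme under the alternative

p_data = prior_h0 * p_data_given_h0 + (1 - prior_h0) * p_data_given_h1
p_h0_given_data = prior_h0 * p_data_given_h0 / p_data

print(round(p_h0_given_data, 3))  # ~0.091 -- an order of magnitude bigger than 0.01
```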
I guess we just don't agree on how to interpret her words.

> The first number, the p-value, is the the chance that we just happened to get a whole bunch of shorter women and a whole bunch of taller men, and this data set is misleading us from the Real Truth that women and men are on average of equal height

IF you assume the Real Truth is the null hypothesis, then the p-value IS indeed the probability of incorrectly rejecting it, i.e. of making a Type 1 Error. She is not misusing frequentist statistics.
> If you assume the Real Truth is the null hypothesis, then the p-value IS indeed the probability of incorrectly rejecting it, i.e. of making a Type 1 Error.

This very sentence is listed as a “p value fallacy” in the article I linked to earlier (and every other article about how to interpret p values).
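If it helps, a small simulation sketch (setup entirely of my own choosing) of what *is* true: under a true null the p value is roughly uniform, so the rule "reject when p ≤ alpha" errs about alpha of the time. Alpha is the error rate of the procedure; the realized p value from one data set is not itself the probability that you've made an error.

```python
# Sketch only: under a true H0, "p <= alpha" happens with probability ~alpha.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
alpha = 0.05
reps = 10_000

pvals = np.empty(reps)
for i in range(reps):
    x = rng.normal(0.0, 1.0, 30)
    y = rng.normal(0.0, 1.0, 30)   # same distribution, so H0 is true
    pvals[i] = stats.ttest_ind(x, y).pvalue

# Fraction of "significant" results under a true null: roughly alpha itself.
print((pvals <= alpha).mean())
```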
Ok. It seems that I incorrectly understood p-value to actually mean the significance level alpha.
Right. You're welcome!
Thanks.
You’re lying. In the original comment you said not only that you “had to laugh” at the given definition of p-value, but made an incredibly harsh comment about that definition. Have some respect for yourself and don’t pretend you didn’t fuck up.
Well it’s funny, cause [she’s wrong](https://reddit.com/r/SneerClub/comments/m8syi9/_/grkxf9p/?context=1), and you being wrong about it too makes it even funnier. I mean I’m not saying she’s terrible or whatever, it’s ok, this stuff is complicated. But it’s funny!
I don’t particularly care if the linked poster is wrong; I don’t read every post in this sub in full when I’m not interested, and I limited myself in this particular case to what *you* said. You lied by saying you weren’t being the arsehole when someone said you’d been an arsehole: own it. People with the right kind of self-respect acknowledge when they’ve said something provably wrong about themselves or things they’ve said.
Ironically, I just found a note from the author a day after the piece was published saying that there was a problem with their p-value definition in the piece that they swore they were going to get around to fixing. In context it read to me like "I'm exhausted and I may or may not get around to fixing this." So you're not wrong to have spotted it, and the author did, too.
Yeah it happens. I’m sure I’ve fucked up my p values on multiple occasions. This is very counterintuitive.