r/SneerClub archives
Slatescott is even getting lazy about justifying getting lazy with justifying his biases (https://astralcodexten.substack.com/p/selection-bias-is-a-fact-of-life)

I just…this is a really half-assed attempt to explain why he doesn’t give a shit about selection biases?

The real studies by professional scientists usually use Psych 101 students at the professional scientists’ university. Or sometimes they will put up a flyer on a bulletin board in town, saying “Earn $10 By Participating In A Study!” in which case their population will be selected for people who want $10 (poor people, bored people, etc).

Well yes, a lot of psych studies are bullshit for that exact reason. Great point! It’s really hard to get a good, random sample for a lot of research into small populations and sometimes we have to accept those limitations.

But there’s no excuse to take a Twitter poll as representative of anything.

Selection bias is fine-ish if you’re trying to do something like test a correlation. Does eating bananas make people smarter because something something potassium. Get a bunch of Psych 101 undergrads, test their IQs, and ask them how many bananas they eat per day.

This is a horrible example. Even massive nutritional epidemiology studies find a ton of spurious correlations, which is why we constantly see articles about wine/meat/coffee being good/bad for you.

Not even gonna touch these guys’ association with Aella’s “research.”

> Well yes, a lot of psych studies are bullshit for that exact reason

Entire books have been written about the problem of generalizing psych study results when your usual sample population is pretty much exclusively American college students. But he wouldn't be a rationalist if he'd actually read a book by someone who knows what they're talking about.
They have the insights of a 14 year old who watches the History channel.
> The real studies by professional scientists usually use Psych 101 students at the professional scientists’ university.

Has this dude ever done a real study, or does he just sit around theory-crafting how studies (particularly psych studies) are done? In the "before times," data would be gathered by going to research firms that specialized in it and forming a plan to gather the data. A lot of times, whole systems of reporting would be established. Who lobbied for the government to keep good statistics on all manner of things? Well, people studying those fields (for whatever reasons), of course.

The way it's actually done now is that a student, trained in the scientific process, will go to Amazon Mechanical Turk and poll a few thousand people for 10 to 50 cents a pop (they will sometimes get some tiny amount of grant money for this, but I know of students who paid out of pocket). This gives them a much larger pool of people to poll. It works wonders for things like "visual and auditory" tasks, and most modern AI advances were trained on those workers. A psych major doesn't have the time to go poll anxious students at the college; sure, some may do it for some liberal arts thing or some such, but every single study with a small sample size should always be taken with a grain of salt, and students should be discouraged from doing them when we have the tools to do broad studies that hit a wide swath of demographics.

But I guess that's the wider point: he's arguing *for* these shit studies with small sample sizes and weak diversity.
On top of all the other problems, asking people to self-report their eating habits is ... [not a simple procedure](https://epi.grants.cancer.gov/dietary-assessment/Chapter%201_Coulston.pdf). People might report that they ate less of something if they feel they shouldn't have been eating it. (And this can be intensified by having family members present at the interview, or by interviewing in a clinic.) They might forget that they had toast with their breakfast, or they might not realize that butter on toast counts when the interviewer is asking about dairy products. The nutritional profile of the macaroni and cheese they ate might not match that in the scientists' database, because there are many ways to make mac and cheese. You might think it's easy to count bananas, but what if the subject bought a fruit smoothie — how many unit bananas went into it?
Yeah that's one point. Good studies make the effort to go beyond simple self reporting or at least recognize the issues and not make broad claims.

lol, of course it’s about aella’s twitter polls that she writes up as “surveys”.

his argument is that actual psych is about this level of quality. Right, the field that kicked off the replication crisis.

per a commenter:

“Beware the man of one study (unless it’s Aella, then it’s fine)”

Perhaps their response would be that Aella is no man, or something.

The comments are a goldmine.

How is he so confused about the difference between a randomly controlled trial and a survey?

Sorry, when you say “control for”, are you talking about between two groups in the same study, or between the study population and the general population, to account for selection bias? I see the first all the time, but almost never the second.
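For what it’s worth, survey statisticians do have a standard tool for that second kind of “control”: post-stratification weighting, where you reweight the sample so each subgroup counts according to its known population share rather than its (biased) share of respondents. A toy sketch with made-up numbers, assuming two age groups whose true population split is 50/50 but whose sample is 80/20:

```python
# Hypothetical sample: per-group respondent counts and mean answers
sample = {"young": {"n": 80, "mean": 6.0}, "old": {"n": 20, "mean": 4.0}}
# Known population shares (e.g. from census data)
pop_share = {"young": 0.5, "old": 0.5}

total_n = sum(g["n"] for g in sample.values())
# Naive mean: dominated by whichever group over-responded
raw_mean = sum(g["n"] * g["mean"] for g in sample.values()) / total_n
# Post-stratified mean: each group weighted by its population share
weighted_mean = sum(pop_share[k] * sample[k]["mean"] for k in sample)

print(raw_mean)       # 5.6 — biased toward the over-sampled group
print(weighted_mean)  # 5.0 — matches the population composition
```

Of course this only corrects for the variables you reweight on; a Twitter poll is biased along dimensions nobody measures, which is the whole problem.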

I’m having trouble getting past this. That seems like a pretty weird thing to lie about. I don’t see how this can possibly be true for him. I mean, there is no doubt he has read a lot of studies, right? I know I have seen that plenty and I have never studied or worked in a field where I would read a bunch of studies.

Maybe some sort of bias made him unable to recall those studies. He should work on overcoming that!

Does he seriously not know the difference between an RCT and a survey?

Matt Levine rose to fame with a pseudo-casual writing style where he pretends to not really know what’s going on as he explains various phenomena. He writes in a very casual and familiar style where he acts surprised by various outcomes to bring himself down to the reader’s level.

Scott’s blog seems to have tried to adopt the same casual writing vibe, but he hasn’t really let go of the explanatory writing style. The combination comes off as overconfidence combined with an obvious lack of research and structure. It feels like the broscience of the rationality community.

Honestly, I’m kind of glad he’s gotten lazy, it’s made him much easier to ignore.

❌ Overcoming Bias

✅ Surrendering to Bias

he’s just been phoning it in for a long time now. His substack income is comfortable and, yes, as you point out, he’s gotten lazy

You don't need real statistics when you have the power of Bayes and twitter polls!

Lol, the degradation of Rationalist science continues. What a joke.

Funny. I saw Pinker had retweeted the original post. I read it and thought ‘huh, might have a point’. Then today it struck me that there is massive selection bias in his samples, and now I see this! Classic.

I’ll probably get downvoted for this, since I’m supposed to be sneering, but I actually think Scott is making a good point in this post. 🤷

I’d view it less as “trust the studies, they’re perfect because selection bias doesn’t matter!” and more like “take everything with a healthy dose of skepticism - because all science has imperfections - but don’t throw out all of your data either.” Certainly some studies with similar methodologies have turned out to be true and useful.

It is weird that all the public EAs/rationalists seem to know each other though (e.g., his mention of Aella).

It's not like he is addressing an audience prone to being overly biased in favor of mainstream academia, though, and he is favorably singling out Aella, which is a whole thing in itself where sloppy surveys are concerned. He even gets called out in the comments for the post seemingly being more about in-group protectionism than epistemic hygiene.
You are falling into the classic trap, where people take a normal point ('research has biases' or 'people are dumping too many toxic chemicals into the water') and extrapolate it to something crazy ('Aella is fine' or 'demonocrats are intentionally turning the frogs gay'), and then you focus only on the first part and not the rest of the argument. This will lead you down a far-right path. Andrew Tate types also love this trick: 'large groups of men are lonely' -> '*Ferengi voice* females'. Don't ignore context. And don't worry about the Rationalists all knowing each other; worry about all the scientific racists and Rationalists knowing each other.
For those who don't know rationalist deep lore,

> 'people are dumping too much toxic chemicals into the water' and then extrapolate it to something crazy ... 'demonocrats are intentionally turning the frogs gay' and then you only focus on first part and not the rest of the argument.

is exactly what Alexander did [here](https://web.archive.org/web/20190206072847/https://slatestarcodex.com/2019/02/04/respectability-cascades/):

> After college I went about a decade without thinking about it. Then people started making fun of Alex Jones’ CHEMICALZ R TURNING TEH FROGZ GAY!!! shtick. I innocently said that this was definitely happening and definitely deserved our concern, and discovered that this was no longer an acceptable thing to talk about in the Year Of Our Lord Two Thousand And Whatever. Okay. Lesson learned.

I just find it so telling that Scott thinks he failed to convince people because the idea is unacceptable, not because he argued poorly, or is personally unpleasant, or misread a social situation. Prior to that paragraph, he links to an [NYTimes opinion piece](https://web.archive.org/web/20190428221215/https://www.nytimes.com/2009/06/28/opinion/28kristof.html) as evidence for the effects of endocrine disruptors. Incidentally, it never mentions gay frogs (or frogs at all).
lol, I actually was just making a link to Rationalists and a bad thing Alex J does. I had totally forgotten about this.
I appreciate that one can describe a bad approach to discussion and immediately find a post by rationality blog SSC that exemplifies it.
Re last paragraph I’m not sure there are two separate groups anymore.
[deleted]
The principle of charity is also something Scooter never applies to anyone else, so I fail to see why we should apply it to him
I thought he applied the principle of charity to eugenicists or something
As I understand it, the principle of charity is an expectation that others must apply to the written output of rationalists, not something that rationalists should ever be expected to apply to others.
Fair
So, the problem here is that there are a lot of academic discussions of whether and how large of a problem it is that psych studies over-select from [white, educated, industrialized, rich, democratic](https://www.apa.org/monitor/2010/05/weird) populations (particularly college undergrads). The reaction is not “oh hey, I guess this is a non-issue because it’s all just correlations, so who gives a shit”; the reaction has been for lots of people to recognize the methodological limitation and attempt to discuss and engage with it, even though it is a fairly entrenched element of the methodology. The Scott and Bailey here is that he is more invested in defending his and Aella’s bullshit than in critically examining the issue. So instead of modus tollens on the problem, he is just going to cast aspersions at the “Cathedral”, and then *defend* the outsider autodidact research practices (since no one’s is *free* of sin), and then he can have folks like you defend him on the trivial point that obviously no one is methodologically perfect, while not actually defending the complete garbage approach of using *heavily* selection-biased twitter polls, which is what he actually is claiming to do in the post.
(As usual if you just assume he is more sympathetic to Moldbug bullshit, as the leaked emails revealed him to be, and his own behavior continually re-affirms, and his Kolmogorov complicity post suggests, etc. you have the decoder ring for what he is up to: he is a not especially stealthed neo-reactionary, or in his garbage parlance he thinks the world would benefit from shifting heavily in that direction, because the wokes are out of control)
> Scott and Bailey This is art
What's important though is that a student survey with a small dataset isn't inherently bad; at minimum it teaches the researcher the process of doing the thing. What matters is that we get more normalized, population-scale data to actually come to a generalization, and that we recognize the smaller set is only a step. And as you note with WEIRD, that is exactly what the researchers are doing as they try to solve the replication crisis. It's guys like Scott that took the grain-of-salt studies to heart while the actual research community wants to fix them. Of course, if you were to ask Scott et al. whether a more diverse, population-wide demographic poll is better, they would naturally agree (he implies as much in his post, disputing that such a study exists). And therein lies the rub: all this bullshit peddling with no actual stance. Here, I'll take a stance: Twitter polls are bullshit.
It's not inherently bad. It's bad because a lot of people generalize the conclusions: instead of saying "my twitter followers think X about gender," it's implied that the twitter poll tracks something interesting that generalizes. His point starts fine, but 1) implying that twitter polls of that sort are not a problematic example of selection bias is obviously wrong, and 2) justifying this by comparing it to studies being published with selection bias is funny, because a lot of published studies are trash and criticized for that same reason. Obviously convenience samples exist and can be informative, but there is a reason why every good introductory text on statistics offers the psych-undergrad example when covering the issue of selection bias.
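The follower-poll problem fits in a few lines of toy Python. This is a hypothetical sketch, not anyone's actual data: a trait is distributed with mean zero in the population, but people with more of the trait are likelier to see and answer the poll, so the poll mean drifts away from the truth no matter how many thousands respond:

```python
import math
import random

random.seed(1)

# Hypothetical population: some trait, standard-normal, true mean = 0
population = [random.gauss(0, 1) for _ in range(100_000)]

# Selection: probability of answering the poll rises with the trait
# (a stand-in for "your followers are not a random draw from anywhere")
respondents = [t for t in population
               if random.random() < 1 / (1 + math.exp(-2 * t))]

pop_mean = sum(population) / len(population)
poll_mean = sum(respondents) / len(respondents)

print(round(pop_mean, 2))   # ≈ 0: the true population mean
print(round(poll_mean, 2))  # well above 0, despite ~50k "respondents"
```

The bias doesn't shrink as the sample grows; more respondents just give you a more precise estimate of the wrong number. That's why "but my poll had 30,000 votes" isn't a defense.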
It might have been redeemable if it were titled "Selection Bias Is A Fact Of Life, **AND** An Excuse For Rejecting Internet Surveys"
no, I downvoted you for misrepresenting the post in your defence. It's about defending Aella on spurious grounds. That's its purpose.
I don’t think it’s weird if they know of each other. Pretty much everyone here including you and me knows both of them for the same reason, we spend time in online spaces where they inevitably come up.