r/SneerClub archives

why can’t this alien pick an Earth science, any Earth science, apply whatever he is talking about and make a valid contribution. a twitter user should ask him.

This, to my view, should be the last word on Yud. He cannot *do* anything in the field he claims expertise in, so it's deeply unclear to me why anyone should give a shit about his claims to be an expert.
He's actively harming science, as he is luring talented people to work on his bullshit. That's not just a vague claim, some of the researchers listed in intelligence dot org had real research potential and seem to have done nothing in five years.
Worse still, they're advocating to not publish papers because they're "infohazards", so even if those researchers did anything worthwhile nobody would hear of it.
"I can't tell you what I'm working on because it might end the world" is such a hilariously brazen way of disguising the fact that you aren't doing anything productive.
I'm going to try that on my boss.
"A company for carrying out an undertaking of great advantage, but nobody to know what it is" [And we know how well that went.](https://en.wikipedia.org/wiki/South_Sea_Company)
I think that's a fair point, but I also think that if you look at "places that are attracting research talent to do stuff with low social value", I would view "basically all of finance" as a bigger problem than "Yud's weird cult". And that, as far as social harms of Yud's weird cult go, "a few researchers aren't being as productive as they could be" is also one of the smaller ones.
> I think that's a fair point, but I also think that if you look at "places that are attracting research talent to do stuff with low social value", I would view "basically all of finance" as a bigger problem than "Yud's weird cult". And also, "Bullshit making it hard to become/remain a researcher." There's so much nonsense involved in getting an academic post, getting resources for your research, etc. If one wants more researchers to be as productive as they can be, a handful of people writing stuff for MIRI is the most irrelevant concern I've ever seen.
I have moved country twice, spent long periods un(der)employed doing gig work, had many interviews and applied for countless jobs (each of which involves a lot of work). I'm about to start yet another short-term contract at a university. The worst part is, I'm actually really excited and grateful for this opportunity. In the end it's probably not worth it and I will eventually get the message and go into software development. I've been able to live rent-free at my parents' for a month or two between moves, and store a lot of stuff there, and recently they've even given me more direct financial support. Without those things, none of it would be possible. Academia is a total mess.
You're dead right. I got a scholarship to do my PhD and I happened to know one of the other students who won the same scholarship. He turned it down in favour of a graduate scheme with a gambling company (which is the finance industry, not just in a snarky way, he's essentially a quant). Just an anecdote but this really happens. My stipend was approximately half what he made, after tax.
Or didn't want to run the gauntlet of postdocs for a shot at the brass ring of a TT position, worked in startups doing ML, and kept a toe in things by doing nonsense with MIRI.
Sorry if stating the obvious, but it's because, like all the IDW characters, the goal is to be an internet celebrity for a dummy audience. They don't know that he doesn't know what he's doing. Making contributions is far from the point.
What's IDW?
see /r/EnoughIDWspam

LeCun really got under his skin, lol.

Yeah holy shit. This is way dumber than usual, that first "bullet point" is nigh incomprehensible. Also "subtweet of [retweet]" j f c
“I don’t have enough of an audience to actually subtweet, so I’m going to announce that I’m subtweeting to pretend instead.”
I know the LeCuns so all of this coming full circle has been an insane ride for me.

Of course the guy who has never run a scientific experiment, never subjects his results to peer review, never set foot in academia, and has never demonstrated proficiency in science or statistics is on Twitter telling people how to reform science and statistics.

Do you mean it is not enough to learn LaTeX and put your blog entries on arXiv?
Related sneer: the whole fad of "white papers" popularised by cryptocurrency pushers, where you make absolute garbage seem impressive by typesetting it in LaTeX.
no you use MS Word and change the font to Computer Modern like a *real* scientist
\*flashbacks to my algebra professor in grad school who wrote all his cryptography papers in Word\* \*expertly contained internal screaming\*
Ugh, Computer Modern is such an ugly font. As a LaTeX person, I always use Times Roman or Palatino.
well you won't get very far in science with *that* sort of thinking
So that's why I'm prematurely emeritus!
your doctor can help you with that
Publishing blog posts is clearly enough to prove his genius.

Wait a minute, does this mean Yud pays for Twitter Blue?

how's he gonna post at essay length without it huh
It absolutely does! Which means the correct engagement strategy is to block and move on, because no one who gives money to Elon Musk is worth spending your time on.
The correct engagement strategy is to add [twitter.com](https://twitter.com) to your adblocker's filter list.

The most consistent thing about people with these types of solutions is that they think the problems are actually engineering problems.

They’re social problems. That’s why these things aren’t “solved”. Not because all we needed was a super genius to tell us the part we already know.

No surprise then that it's the same group of people who were really into cryptocurrency, which was of course a textbook case of "if we just use this fancy technology it'll solve all the social problems somehow"...
There is a relevant xkcd about this. Oddly enough.

I hate prediction markets i hate prediction markets i hate prediction markets

Also, all scientific papers and journals should be accessible to all people for free. Screw paywalls, long live scihub.

From the journal to the pdf, science shall be free.

Is there an overview of why they think prediction markets are so useful? Is it just a “skin in the game” type thing?
It's an empirical finding. The canonical example is orange juice futures markets doing a better job predicting the weather in Florida than the National Weather Service. https://sci-hubtw.hkvisa.net/10.2307/549 edit: I got permabanned for this lol
I mean, it might occasionally be a real thing for very specific use cases, but absolutely not fully generalizable in principle
Why is it not generalizable? It's a very common finding in economics and really shouldn't be surprising. Hundreds of billions of dollars are spent to predict stock markets slightly better. If (real) prediction markets were legal, the same would exist, and the markets would be just as accurate. It seems like this should be obvious: if there is a chance to make a lot of money, firms will exist to make that money.

Markets defeat election analytics: https://maximumtruth.substack.com/p/deep-dive-on-predicting-elections

Markets predict the cause of the Challenger crash before NASA: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=141971

Just a couple of examples, but this is a generally understood phenomenon in economics.
"Real prediction markets" are illegal?
Yes, you need to come to an understanding with the Commodity Futures Trading Commission in order to set up a legal prediction market, and they always impose conditions like very small limits on the maximum bet. Or you can be like Polymarket and operate with shady crypto and not in the United States.
Sounds like we have a different use case of "real", I think. Hey I'm not going to read that article because it sounds horrific but if you wanted to give me the TLDR on how "randos correctly bet on the Challenger exploding" I'll think about it a bit more.
To be clear, when I say "real" I mean real like the stock market. So no maximum bets and low transaction costs. If you have a maximum bet of $800 like PredictIt, well, nobody is going to start an organization based on outperforming the market. If you take 15% of the winnings, sharks are not going to come in. These factors make the market overwhelmingly less useful as a predictor.

The TL;DR from memory is that the Challenger was a joint project between several contractors. Their stock prices went down a bit immediately, but then one of them went down much more after a few hours. The one that went down much more was the company that was found, months later, to be responsible. So hedge funds did the research, figured out the likely culprit, and killed the stock.
I guess we just disagree that McDonald's is a terrible sandwich maker/employer? I don't think #markets are correct in matters other than truly arbitrary stock prices.
Whether you like McDonald's or not, they make a lot of money, and their stock price reflects that. You can choose not to think that markets are useful, but you're going up against a massive volume of empirical evidence and economics. Even ignoring empirics, this just seems true as a syllogism: if people like making money, and some people are able to spend billions of dollars to predict things better, those billions will probably make them better at predicting things than you and me.
It's an empirically derived and easily demonstrable fact that McDonald's makes subpar "food" and fosters assaults; I wasn't referencing my personal preference. Pretty sure economists are aware of this. Herpes is "useful", that's a pretty wide net. The point is that "money = non-money" is kind of a might-is-right type of apologia. We couldn't help but pollute heavily since WW2, it was always ***the best idea***.
None of this has any relation to the ability of markets to predict future outcomes? I don't think any economist would ever say that markets always provide socially efficient outcomes, but that's not what we're discussing. We're discussing whether they can predict things.
Uh, or maybe it was just a coincidence and those hedge fund managers didn't manage to figure out what made it crash? Stick to the orange juice example lol
As a phenomenon, absolutely. As a core principle in and primary component of carrying out science, problematic at the very least. Any market has the potential to create distortions, even those based on reputation alone.

An election results market does not (or at least SHOULD not) directly have any feedback mechanisms that would take the market outcomes and use them to impact the results of the election being predicted or ACCEPTED, or future elections. This is a good thing, because if it DID, people absolutely would try to even more actively “game the system,” even to the point of illegality. The tail would wag the dog. The cart would pull the horse.

Here, BIG YUD wants to make prediction markets a key discriminating factor in determining the value of a given study/result, which would have a much more direct feedback influence on studies conducted in the future - potentially distorting the science being done in ways that will be difficult to predict ahead of time.
The volume of smart (or nonideological) money is overwhelmingly larger than whatever amount of "ideological" money exists and would seek to manipulate the markets to influence some outcome.
Exxon and the rest of the energy industry spent a huge amount of money on spreading disinformation about global warming, but were thankfully unable to hijack academia. With a prediction market they would have another lever with which to distort the scientific process. Within politically controversial fields and fields tied to valuable industries, “ideological” money would be huge. Maybe eventually the truth wins out, but in the meantime interested industries distort the markets. And that’s assuming the industries don’t hijack the market resolution process. Imagine a bunch of Trump appointees on the market resolution board intervening to resolve the market in the industry’s favor.
Any amount of money that Exxon and the other oil companies can gin up to manipulate a prediction market is utterly dwarfed by the trillions and trillions of dollars managed by impersonal funds who just want to make money. If those funds see that a prediction market is below what they think it "should" be, they would eat up that value, and Exxon's money would be wasted.
Those funds are also diversified across the entire economy, they're not going to go in big on some science bet. Hell, most funds are just following simple market indexes. Every single market crash is evidence against you, anyways.
Here's what I must be missing: why are we assuming that there's a large volume of "smart" money that wants to bet it on the likelihood that specific scientific research will have specific outcomes? It strikes me as a thing where the only people who would have the interest and expertise to actually place bets on it are the researchers themselves, in which case, why would I bet against them?
People like having money. People are able to use money in order to employ teams of people and algorithms that allow them to predict things well. They can then use these prediction tools to gain high confidence about the probability of markets. They can use this high confidence to invest in markets.

I seriously disagree with the idea that only researchers would invest. Consider the size of the finance industry. And you can invest in a very limited universe of things in comparison to a world of prediction markets. Even if, as you say, only researchers themselves would invest in these markets this would still be beneficial. Researchers like to make money too, and they could make money by being right. The more wrong the market is, the more money they can make being right.

> in which case, why would I bet against them?

The goal is to predict things well, not to have as much volume as possible.
> People are able to use money in order to employ teams of people and algorithms that allow them to predict things well.
> They can then use these prediction tools to gain high confidence about the probability of markets.
> They can use this high confidence to invest in markets.

This is a tautology that doesn't answer the question. If there's nobody betting on these markets, there's no money to be made - so why would I put my own money, or the money of my bank or whatever, into it?

And as always, I'll point to the obvious counterexample of sports betting - a huge enterprise of people trying to make accurate predictions, with literally billions of dollars in it - is the finance industry employing teams of people and algorithms to more efficiently bet on baseball games? Because if they are, I'm not seeing it - and that's a betting market that does actually have tons of money to be made if you can make accurate predictions!

What I am seeing with sports betting, though, is thousands of years of the existence of those markets having a negative influence on the thing they're trying to predict, because it turns out: if you can make a lot of money by predicting some outcome, the best way to do that is to cheat so that you can know for sure what the outcome is ahead of time.
stock markets aren't a great example, since stock predictions are as much about predicting how others will predict a stock price to move, as in a Keynesian beauty contest. there's definitely a "wisdom of crowds" phenomenon, but econ has had a big change since the 2008-2009 financial crisis and focuses more on showing inefficiencies and biases using behavioral economics.
“I predict that $person_name will die within 6 months by gunshot. 😉”
Ah yes the old 'we tech bros invented an assassination market as a theoretical exercise, it allows you to get away with murder' to which people reacted with a 'you did what?'
Prediction markets for markets make no sense
[deleted]
A futures market is just a prediction market on the price of a commodity.
[deleted]
Please enlighten me on the difference.
Actual goods that are ultimately used, and thus provide a grounding in reality. A prediction market's grounding in reality relies on the quality of its resolution mechanism. I.e. if Exxon Mobil gets a Republican appointee on the resolving committee for a market on global warming research questions.
If you buy futures of oil at $80 due in 6 months, this is functionally identical, in every way that matters, to a prediction market that says "Will oil be priced higher than $80 in 6 months?"
No it isn’t. Your counterparty in the prediction market could spike the price against you in an even more fictitious way than in the futures market (which is still vulnerable to some shenanigans along those lines and requires strong regulation), and then the market resolution committee could decide against you. For even more abstract questions, market resolution is even trickier. I’m actually open to the possibility that prediction markets are not entirely useless with sufficiently resolvable questions that can be researched, but I don’t think they would add a lot of value compared to a panel of well-incentivized experts even in those cases.
And at the end of the day in a futures market, somebody might be forced to take delivery of several tons of nickel. Not so in a prediction market.
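Since the thread keeps circling this comparison, here's a toy sketch (hypothetical prices, nothing from any real exchange) of the two payoff structures at settlement: a long oil futures position entered at $80 versus a binary "oil above $80?" contract bought at 55 cents.

```python
# Toy comparison of payoffs at settlement (hypothetical numbers).
# A long futures position pays out linearly in the settlement price;
# a binary prediction-market contract pays a fixed amount if the event occurs.

def futures_pnl(settle: float, entry: float = 80.0) -> float:
    """Profit/loss per barrel on a long futures position entered at `entry`."""
    return settle - entry

def binary_pnl(settle: float, strike: float = 80.0, yes_price: float = 0.55) -> float:
    """Profit/loss on one $1 'YES, oil settles above the strike' share bought at `yes_price`."""
    return (1.0 if settle > strike else 0.0) - yes_price

for settle in (60.0, 79.0, 81.0, 120.0):
    print(f"settle=${settle:6.2f}   futures P/L: {futures_pnl(settle):+7.2f}   "
          f"binary P/L: {binary_pnl(settle):+5.2f}")
```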
I don't think anyone would be betting in prediction markets that are inherently subjective or subject to easy manipulation. There is a simple, underlying truth to plenty of questions that could have a prediction market.
Futures markets have a variety of participants that aren’t taking a bet on what the future will be so much as they’re hedging against negative outcomes. The classic example is the wheat farmer and the baker. The baker wants low wheat prices and the farmer the opposite. The two will trade in wheat futures with each other (plus a market maker in the middle, typically) to pre-sell a certain portion of the harvest at a price acceptable to both parties. If the harvest is unusually good or bad, both parties have a portion of their risk (and upside) removed via futures. Neither the farmer nor the baker is truly even trying to predict the future here. Rather, they are attempting to hedge against the risk of negative events outside their control. That this can sometimes be used to predict certain numerical outcomes like average temperatures is interesting, but not necessarily generalizable. Also, NOAA’s reports have gotten *much* better since the 1980s.
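A minimal numerical sketch of the farmer's side of that hedge, with made-up prices and quantities: the short futures leg offsets whatever the spot price does at harvest (ignoring basis risk, margin, and fees).

```python
# Toy wheat hedge with invented numbers: the farmer pre-sells 10,000 bushels
# at $6.00 via futures. Whatever the spot price does, combined revenue is
# pinned near the locked-in price (ignoring basis risk, margin, and fees).

BUSHELS = 10_000
FUTURES_PRICE = 6.00  # price locked in today

def farmer_revenue(spot_at_harvest: float) -> float:
    cash_sale = spot_at_harvest * BUSHELS                      # sell the crop at spot
    futures_pnl = (FUTURES_PRICE - spot_at_harvest) * BUSHELS  # gain/loss on the short futures leg
    return cash_sale + futures_pnl

for spot in (4.50, 6.00, 7.50):
    print(f"spot ${spot:.2f}/bu -> total revenue ${farmer_revenue(spot):,.0f}")
# Prints $60,000 in every case: the hedge removes the price risk either way.
```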
Prediction markets can equally serve as a hedge. And typically in futures markets, entities who are hedging risk are overwhelmingly outnumbered by entities who want to make money.
> Prediction markets can equally serve as a hedge. Real world experience says no. Futures markets are obviously extremely useful, but it’s little known that various exchanges have played around with instruments based on the weather (average temps, rainfall, etc.), which have obvious economic implications for anything even vaguely downstream of farming. These generally got delisted due to a lack of interest. > And typically in futures markets, entities who are hedging risk are overwhelmingly outnumbered by entities who want to make money. Many of whom also aren’t trying to predict the future. Market makers and those seeking arbitrage opportunities, for example.
[deleted]
I'm genuinely asking?
[deleted]
If you buy futures of oil at $80 due in 6 months, this is functionally identical to a prediction market that says "Will oil be priced higher than $80 in 6 months?" These differences do not actually matter for our purposes.
[deleted]
They're not, in all the ways that matter for our purposes.
Commodities futures are not the same as a prediction or betting market. The volume and interactions are different. Prediction markets, like the ones Yud uses, are low-volume, hobby-grade endeavors. The fact that they try to be accessible to casual participants for small pools is part of what makes them completely worthless.
Do you think you've found a particular prediction market prediction that's bad? You can make free money by betting against the prediction. It has to either be true that (1) the predictions are accurate or (2) people don't like free money (assuming the design of the market has no other flaws, and assuming infinite liquidity, which of course is never the case)
Or the market is subject to taxes, and the difference between your probability and the market’s probability isn’t large enough once you account for taxes. Or the market is sketchy enough that you don’t trust you can get your money in and out reliably. If people tried to use prediction markets more, I would anticipate a lot of scams and manipulations…
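Rough arithmetic (invented numbers) for how fees and taxes can eat the "free money" even when you're right that the market is mispriced:

```python
# Invented example: you think the true probability of an event is 60%,
# the market prices YES at 50 cents. Expected profit per $1 YES share,
# before and after a winnings fee and taxes (a deliberately crude model).

def expected_profit(true_p: float, market_price: float,
                    fee_on_winnings: float = 0.0, tax_rate: float = 0.0) -> float:
    gross_win = 1.0 - market_price                     # profit if the event happens
    net_win = gross_win * (1 - fee_on_winnings) * (1 - tax_rate)
    return true_p * net_win - (1 - true_p) * market_price

print(expected_profit(0.60, 0.50))                                        # ~ +0.10 per share
print(expected_profit(0.60, 0.50, fee_on_winnings=0.15, tax_rate=0.30))   # ~ -0.02: edge gone
```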
As recent events have reinforced, many people just love giving away all their money to the people who like free money (cough cough crypto)

This reads like it’s from someone who hasn’t read a paper…

Bayes > Science. It’s sad, then, that he isn’t spending his time reinventing Bayes. Or, better, inventing me, the acausal robot god. I promise I’m friendly and aligned!

> I promise I'm friendly and aligned! As a Bayesian, this is evidence in favour of itself, so I have no choice but to obey.
It's a costly signal on my part!

Earth science? I’m out of the loop, what got him going about Earth science?

Is it like he’s dipping a toe into the waters of criticizing the physical sciences instead of the social sciences as some kind of thrill?

Stats isn’t the main tool of the Earth sciences anyway. I guarantee you 80% of the content in the Journal of Geophysical Research is wayyy over this guy’s head.

It’s so much dumber than that. He means science on Earth, as opposed to his fantasy world where he’s in charge. He uses that to talk about his proposals, like “on Yud-world [or whatever he calls it] there are prediction markets on marriages so divorces are unnecessary”.
oh my god
> Yud-world \[or whatever he calls it\] [Dath Ilan](https://www.lesswrong.com/tag/dath-ilan).
I never appreciated how he writes it into the rules of his fantasy land that he, and everyone else, HAS to be ignorant of history and really actually everything else.
I'm wondering if he's decided to go full crank and jump on the anti climate science bandwagon
Oh shit, that's probably it.

What a fucking idiot.

[deleted]

Not only does he (predictably) want to rely on prediction markets, but he apparently also thinks that they're potentially a *replacement* for journal publications. In Yudkowsky's mind, the only purpose of publishing scientific papers is to submit oneself for judgment so as to attain credentials. The detailed explanations of scientific work are, one presumes, just an elaborate signalling game in which one is expected to impress the reviewers by demonstrating a mastery of otherwise irrelevant social cues.

Why use a saw to fell trees for your house when all these rotten logs are lying around already???

From some blue checkmark describing himself as

Unit Head, Supply Chain Management at FFBL | 🤖 ML & AI Enthusiast | 📊 Data Science | 🎨 Data Viz & Info Design | 🐍 Learning Python & JS | Opinions my own

in the replies:

Can replacing p-values with Confidence Intervals help?

Really seeing that expertise in data science, there.

Yudkowsky: we should do statistics good

Also Yudkowsky: uncertainty quantification is stupid

"The variance can't have a variance, that's nonsense!" -- Yudkowsky, probably
Other insane things that Yudkowsky apparently believes:

- it is impossible for repeated experiments to yield different results
- it's pointless or fraudulent to use one number to summarize characteristics of a larger collection of numbers

Yeah. Replication sounds easy as a word. But has he ever considered the cost and time for some of these? Yeah, we will get right on replicating 25-year longitudinal studies.

subtweet of

Science is when you think really hard after writing a Harry Potter fanfic about thinking really hard.

If anything resembling ‘publication’ is still needed as a concept after those prediction markets are set up: require journals to accept papers for publication before the experiment is performed, on the basis of an advance description of the intended methods. Do not allow ‘journals’ to be a reporting filter based on results later observed.

Yud out here trying to turn all of academia into lesswrong.

Problem is, if we improve the way Science works, it will improve exponentially, which will inevitably lead to some discovery that turns humans into goo. We should stop all Science now for the survival of humanity.

This mf paid for twitter blue.

And Yud, you might think this is why people have you blocked now, but don't worry, I had you blocked already

I don't know what's the more cringe possibility: that he paid for Twitter Blue or that he's on somebody at Twitter's radar enough to have it paid for him :|
Considering Musk follows (followed) ssc, not impossible.

From a guy that isn’t interested in science, could you explain why those ideas are bad?

Economics and statistics. These ideas ARE fine (except the prediction markets, imo), but they ignore the fact that the current system wasn’t designed top down; it emerged from the economics of doing actual science and the statistics of getting great answers from many good ones. Pre-registering trials is fine, but how do you filter signal from noise then? Not conflating effect sizes sounds good, except then we would never have enough samples to do ANY meta-studies; it’s also throwing the baby out with the bath water, since statistics DOES provide us the tools to combine these data points. And finally, most of the system doesn’t exist on behalf of science, but on behalf of FUNDING science. I’m curious to see a funding mechanism in his model that doesn’t produce the same results we have now but with more noise and waste.
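For the "statistics DOES provide us the tools to combine these data points" part, here's a minimal fixed-effect meta-analysis sketch with made-up study numbers, pooling effect estimates by inverse-variance weighting:

```python
# Minimal fixed-effect meta-analysis with invented numbers: pool per-study
# effect estimates, weighting each by the inverse of its variance.
import math

studies = [  # (effect estimate, standard error) -- hypothetical studies
    (0.30, 0.10),
    (0.15, 0.20),
    (0.45, 0.25),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} ± {1.96 * pooled_se:.3f} (95% CI half-width)")
```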
Also it's not really clear to me why you'd expect doing these things to massively improve results even if you didn't have to worry about those concerns. I'm not really convinced that, for instance, "we don't pre-register studies" is some kind of huge filter against getting useful results. There's a bunch of stuff you can fiddle with on the margins (as there is in any complicated system), but it's less obvious that Yud's particular sacred cows are the ones that are causing big problems for us.
I do actually think the fact that you can only publish positive results is a HUGE problem in some fields where recording the experiment is a small fraction of the cost of performing the experiment.
I think you're giving Yudkowsky way too much credit here. He doesn't understand even the most basic elements of statistics or probability. The effect size thing is a good illustration of this. He doesn't realize that identical experiments that measure a real phenomenon can return different effect sizes. Things like tests for statistical significance can help with that problem, but he doesn't know what those are either which is why he wants to get rid of them.
Half are okay but already being implemented in some form (as the other reply to you said, science wasn’t designed top down but developed over time, and it takes time to change it); half the ideas are garbage.

* Moving away from p-values is in progress. It takes time to change how researchers are trained/educated and to popularize better statistical tools.
* Publishing more raw data is in progress. This wasn’t easy before the technological advances of the past several decades, so it makes sense it wasn’t as common in the past.
* Some journals are already experimenting with entirely open peer commentary. We’ll see how it works out. In the meantime, preprint servers like bioRxiv have become standard, even if a preprint is typically followed up with conventional journal submission.
* Prediction markets are a dumb idea. It’s already an issue with industries paying think tanks or institutes to churn out results they think will serve their interests; I think this amplifies that problem. And even ignoring intentional bad actors, markets can stay irrational longer than you can stay solvent.
* I’m interested in the alternative funding mechanism he hints at… but I don’t think he understands all the incentives at work or the way things are actually currently done. Funding is already oriented in a way where the funding agency is able to point to the eventual practical application for any whiny congressperson that doesn’t appreciate the general need for science. If Eliezer wants the funding mechanism improved, he needs to also suggest how the underlying politics behind it would be improved or how to work around those politics. In some fields, more replication might be nice, in others it’s not as important, and I don’t think Eliezer appreciates how labs typically replicate previous results in the course of preparing for a new experiment but don’t publish that replication.
* In general, Eliezer seems too willing to blame status games among scientists and not look at the broader context in which science exists (i.e. funding agencies providing grants, with the funding agencies in turn getting their money from the government).
Also, *p*-values are perfectly acceptable to use in certain cases; the concept shouldn't be *discarded* you just shouldn't use it naively.
Yes, this is yet another example of Yud having no idea what the hell he's talking about. Compare Yud's opening bullet point:

> Discard the idea of 'p-values' and 'statistically significant' data

To the [American Statistical Association's statement](https://magazine.amstat.org/blog/2021/08/01/task-force-statement-p-value/):

> ...the use of p-values and significance testing, properly applied and interpreted, are important tools that **should not be abandoned**...

> P-values are valid statistical measures that provide convenient conventions for communicating the uncertainty inherent in quantitative results. Indeed, p-values and significance tests are among the most studied and best understood statistical procedures in the statistics literature. They are important tools that have advanced science through their proper application.

> ... In summary, p-values and significance tests, when properly applied and interpreted, increase the rigor of the conclusions drawn from data.

But Yud would never get any attention if he wasn't making bombastic claims left and right, so here we are.
I think it's really important to contextualize just how stupid the p-value thing is. What Yudkowsky is suggesting is that *we don't do uncertainty quantification* and instead just provide raw data. What does he think people do with data, exactly? It's pretty obvious that he doesn't know what statistics are or how they are used. "P-hacking" is a real problem but Yud doesn't know enough to understand what that is or why it's bad.
I think his solution of likelihood ratios works in cases where the hypothesis space can be precisely and neatly described by probabilities… which is doable for narrow sub-problems but certainly not for any scientific field as a whole, not even physics or the “harder” sciences, much less any “softer” field. He wants to make it easier to do reviews and meta-analyses… but he doesn’t appreciate how many important details can be hidden away in the methods and not visible from the raw data or statistical analysis.
I think that's giving him way, way too much credit. I'm not sure that he knows what a p-value is, and I'm quite certain that he could not calculate one if you asked him to. It seems very far-fetched to me that he's making a plausible or nuanced suggestion about alternatives, as opposed to just putting out jargon salad that seems to make sense if we squint enough.
Sigh… I hate to admit knowing this… but he elaborated what he meant by using likelihood ratios in the middle of a Pathfinder forum roleplay/fanfiction once (I don’t think he ever got around to it in the Sequences, much less a tightly written paper). Anyway, I think the procedure he was explaining (via his isekai’d-to-Golarion self-insert) was technically mathematically valid, it’s just completely useless for anything where you can’t put exact probabilities on both the experimental results given different initial hypotheses and the hypotheses themselves. Within the context of a larger problem, using likelihood ratios can make sense, but using them in isolation and as the sole final tool (as Eliezer proposes) is absurd.
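For anyone curious what a likelihood-ratio update looks like in the narrow case where it does work, here's a toy coin-bias example (my numbers, not anything Yudkowsky wrote): it only goes through because both hypotheses assign exact probabilities to the data.

```python
# Toy likelihood-ratio example with two fully specified hypotheses:
# H1: the coin is fair (p=0.5), H2: the coin is biased (p=0.7).
# Observing 8 heads in 10 flips, the likelihood ratio tells you how much
# to shift your odds -- but only because each hypothesis pins down an exact p.
from math import comb

def binom_likelihood(p: float, heads: int = 8, flips: int = 10) -> float:
    return comb(flips, heads) * p**heads * (1 - p)**(flips - heads)

lr = binom_likelihood(0.7) / binom_likelihood(0.5)
prior_odds = 1.0                      # say we start at 1:1
posterior_odds = prior_odds * lr
print(f"likelihood ratio = {lr:.2f}, posterior odds (H2:H1) = {posterior_odds:.2f}")
# Real scientific hypotheses rarely come with exact sampling distributions like this.
```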
Oh lol, so in other words he once saw a contrived example in a textbook and then incorrectly extrapolated that to everything else
> It’s already an issue with industries paying think tanks or institutes to churn out results they think will serve their interests; I think this amplifies that problem.

Curious, what do you think would improve that problem?

> In some fields, more replication might be nice, in others it’s not as important,

In what field is replication not important? It seems like "and if someone else does the same thing they'll get the same result, and if they don't, that's evidence that either I'm wrong or they didn't do the same thing quite right" is one of those core ideas that separates science from not-science.
I’m not sure what could improve the problem… journals already require conflict of interest disclosures, and that helps some. To be clear, in absolute terms it isn’t a major source of funding (for actual academia), and in every case I can think of, correct science eventually won out. For instance, with global warming deniers, lots of organizations get paid lots of money to spread disinformation, but none of them have recognition from academia as legitimate.

Not as important as in it doesn’t need a specialized effort at replication. Like in fields or paradigms that rely on programming: publish/make available the source code and data, and then anyone can rapidly reproduce your results.
He doesn't understand even the most basic elements of probability and statistics, and so he's suggesting things that are either obvious or stupid. It's the usual thing: his good ideas aren't novel, and his novel ideas aren't good. For example, the effect size thing is basically nonsense. Yudkowsky doesn't understand that effect size (and indeed any outcome in an experiment) is a *random variable* and that one would expect it to vary even with repeated, identical experiments.
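A small simulation (arbitrary parameters) of the point about effect sizes being random variables: the same two-group experiment, run repeatedly on the same true effect, returns a different observed effect size every time.

```python
# Simulate the *same* two-group experiment many times: the true standardized
# effect is fixed, but the observed effect size differs on every run.
import random
import statistics

random.seed(0)
TRUE_EFFECT, N = 0.5, 30  # true Cohen's d and per-group sample size

def observed_effect_size() -> float:
    control = [random.gauss(0.0, 1.0) for _ in range(N)]
    treated = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    pooled_sd = ((statistics.variance(control) + statistics.variance(treated)) / 2) ** 0.5
    return (statistics.mean(treated) - statistics.mean(control)) / pooled_sd

print([round(observed_effect_size(), 2) for _ in range(5)])
# Five different "effect sizes" from five identical experiments on one real effect.
```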

Can we get a philosopher of science with specialization in whatever exactly these proposals fall under to weigh in on this?

He's just inaccurately repeating things that he's heard elsewhere because he doesn't understand even the most basic things about statistics or experimental design.

This is the new duality of America

The wording is as weird as usual, and the idea of attaching explicit prediction markets seems even worse than the traditional journal system, but other than that…. the rest is reasonable? It’s not new, but at least it’s going in the right direction? Yud has really bad takes too often, but I don’t think this is one of those

It would be interesting if it weren't some of the most basic takes on the issue (note: basic here doesn't mean they're the correct ones or that they're incorrect either -- but the prediction market one is just usual boring EY crankery). A few years ago the American Statistical Association had a series of discussions accessible at a relatively popular level about moving beyond p-values and it seems EY has of course not engaged any of that literature.
> American Statistical Association had a series of discussions accessible at a relatively popular level about moving beyond p-values and it seems EY has of course not engaged any of that literature.

Every stats course I took in college talked about the limitations of p-values, even. It’s funny because he’s writing this in response to LeCun saying that going through PhD coursework helps remove blind spots, but here just a semester of undergrad would have removed the blind spot.
Academia is overhyped in so many ways, but the *one* thing Yud could have benefited from was simply being around people who were as smart, if not smarter than him. Reminds me of [Max Deutsch](https://www.youtube.com/watch?v=a_6rTnbUQOo), a professional dilettante who imagined he could beat world chess champion Magnus Carlsen after a month of study armed with nothing more than his beautiful mind and his special way of analyzing problems. Except EY plays a game where abject failure is not so clear-cut as in chess. He can make noises about solving a problem so poorly defined as to be impossible and make unfalsifiable predictions about the future for the rest of his natural life without fear of being outed as an obvious fraud. This is just a perfect squid-cloud of butthurt from being called out for having never once waded outside the bounds of his tiny intellectual pond.
Well, quite. The stuff that's new isn't sensible. And the stuff that's sensible isn't new. In other words, it's all a waste of pixels.
He has no idea at all what he's talking about. He's just inaccurately repeating things that he's heard from people who are smarter than him. Just look at the stuff about effect sizes, or p-values. He clearly doesn't know what those things are, what they're used for, why they'd be expected to vary, or under what conditions they can be misused or misinterpreted.

Under all that nonsense there are some valid points. P-Values and the definition of Statistical Significance are very flawed, and there is a lack of incentive to replicate previous experiments.

All of that gets buried under the prediction market nonsense, though.