r/SneerClub archives
Paul Christiano calculates the probability of the robot apocalypse in exactly the same way that Donald Trump calculates his net worth (https://www.reddit.com/r/SneerClub/comments/13peat6/paul_christiano_calculates_the_probability_of_the/)

Paul Christiano’s recent LessWrong post on the probability of the robot apocalypse:

I’ll give my beliefs in terms of probabilities, but these really are just best guesses — the point of numbers is to quantify and communicate what I believe, not to claim I have some kind of calibrated model that spits out these numbers […] I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day. One day I might say 50%, the next I might say 66%, the next I might say 33%.

Donald Trump on his method for calculating his net worth:

Trump: My net worth fluctuates, and it goes up and down with the markets and with attitudes and with feelings, even my own feelings, but I try.

Ceresney: Let me just understand that a little. You said your net worth goes up and down based upon your own feelings?

Trump: Yes, even my own feelings, as to where the world is, where the world is going, and that can change rapidly from day to day…

Ceresney: When you publicly state a net worth number, what do you base that number on?

Trump: I would say it’s my general attitude at the time that the question may be asked. And as I say, it varies.

The Independent diligently reported the results of Christiano’s calculations in a recent article. Someone posted that article to r/MachineLearning, but for some reason the ML nerds were not impressed by the rigor of Christiano’s calculations.

Personally I think this offers fascinating insights into the statistics curriculum at the UC Berkeley computer science department, where Christiano did his PhD.

Shoutout to this guy on r/MachineLearning:

It’s fascinating how people who really should know better keep pulling random percentages out of the ether and acting like it means anything. Like, they should know that probabilities usually mean something, right?

> keep pulling random percentages out of the ether and are acting like it means anything

I think it's useful to assign numbers to beliefs even if those numbers are low in precision. Say I'm trying to estimate Ron DeSantis' chance of being elected president in 2024. Without using numbers, I could make one of these statements:

1. I think it's likely DeSantis will become president (implying > 50%)
2. I think it's unlikely he'll become president (implying < 50%)
3. I think it's possible he'll become president (implying > 0%)

This is an ineffective way to communicate, since there's a big difference between thinking he has a 33% chance of becoming president and a 10% chance, but that difference is hard to get across in plain English. Therefore it's just clearer (and more fun imo) to assign a low-precision estimate to your beliefs than to rely on *even lower precision* English to communicate them instead.

If you're a math nerd you might've heard of [Fermi estimates](https://en.wikipedia.org/wiki/Fermi_problem). Essentially, this technique involves making a bunch of educated guesses and doing math on them to approximate quantities that would be challenging to predict without any information. Despite the crudeness of the method, Fermi estimation produces surprisingly accurate results, which shows the value of rough estimates when trying to form a worldview. Basically these crude guesses are not as dumb as they seem, and the only alternative, which is using plain English, is straight up worse.

I think the OP of the r/MachineLearning thread linked to a blogpost that pretty much said the same thing.
Fermi's estimates were based on empirical observations and the laws of physics, not on Fermi's feelings.
Perhaps you're thinking of his work in physics, but that's not what I'm referring to here. Fermi estimates, or Fermi problems, are basically problems where you do napkin math with reasonable-sounding but made-up numbers to estimate things, and the results can be surprisingly accurate or insightful for what basically amounts to a shot in the dark. The classic example is figuring out how many piano tuners there are in Chicago without looking up any numbers: https://www.grc.nasa.gov/www/k-12/Numbers/Math/Mathematical_Thinking/fermis_piano_tuner.htm
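For the curious, that classic estimate is just a short chain of multiplications. A minimal Python sketch, where every input is an admitted made-up assumption (which is the whole point of the exercise):

```python
# Classic Fermi estimate: piano tuners in Chicago.
# Every input below is a deliberately made-up guess.
population = 9_000_000            # rough metro Chicago population
people_per_household = 2
households_with_piano = 1 / 20    # guess: 5% of households own a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year
tuners = tunings_needed / tuner_capacity
print(round(tuners))  # ~225; the point is the order of magnitude, not the digits
```

The output only claims to be right to within an order of magnitude, and the value of the exercise is that every assumption is written down where it can be challenged.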
My point here is that your emotions don't constitute empirical data, so when you turn them into numbers and do math at them you're really just making stuff up and doing so the hard way. Numbers, as it turns out, are not literally magic.
> My point here is that your emotions don't constitute empirical data

I don't think Christiano's post was intended to be some sort of super rigorous argument with precise calculations. From my reading of it he's basically saying "Look, people keep asking me, so here are my thoughts on AGI ruin with some numbers attached, but take them with a massive grain of salt." Maybe you think the numbers are dumb and it would be less silly if he just wrote a regular essay without estimates. Part of that's down to personal taste, but like I said, words can be even *less* accurate than made-up numbers, so I think this style helps you communicate more, if you think about it.

Also, putting numbers on beliefs (even made-up ones) can help you fine-tune your beliefs better than you can without them. Going back to DeSantis (my apologies if you're not an American): say I don't have a good estimate in my head for DeSantis' shot at the White House, but I believe

1. DeSantis has a 40% chance of beating Trump and winning the Republican Party nomination
2. If he wins the nomination, he has a 40% chance of beating Biden in the election and becoming president

Then 0.4 x 0.4 = 0.16, so I *should* believe that he has a 16% chance of becoming president at the moment, and if I think that 16% estimate is off then one of my assumptions must also be off. This is the type of reasoning you can't do with words alone, and that's why putting numbers (even super imprecise ones) on beliefs is a useful tool.
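That two-step product can be written out as a trivial sketch. Both inputs are the commenter's made-up toy numbers, not real estimates:

```python
# Chaining two conditional guesses into one overall estimate.
# Both inputs are the toy numbers from the example above.
p_nomination = 0.40         # guess: P(wins the Republican nomination)
p_general_given_nom = 0.40  # guess: P(wins the general | nominated)

p_president = round(p_nomination * p_general_given_nom, 2)
print(p_president)  # 0.16
```

Note that the product is only valid if the second number really is conditional on the first, which is exactly the catch the EDIT downthread is gesturing at.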
>I don't think Christiano's post was intended to be some sort of super rigorous argument with precise calculations. *Then he shouldn't be using numbers.* It's not personal taste, it's just that I know what numbers mean and how they work. EDIT: Also, are you *sure* that 0.4 x 0.4 is 0.16 in this case? Think on that.
> Then he shouldn't be using numbers.

I've already given you multiple reasons why numbers serve a useful purpose outside of math and science papers. I don't know what more I can tell ya. You can look at the wiki link for Fermi estimates if you want a more rigorous argument for why ballpark estimates work well.

> EDIT: Also, are you sure that 0.4 x 0.4 is 0.16 in this case? Think on that.

I looked at it again and the math checks out in my head. If there's a mistake here you're free to point it out.
It's not hard to think of a reason why that calculation might be wrong if you have a proper education in probability and stats. I get that you want the comforting certainty that comes with assigning numbers to your beliefs, but maybe it's better not to use math until you understand it correctly.
I'm asking in good faith, if I'm wrong about something obvious here I'm genuinely curious to hear what it is.
Are you referring to the possibility of other candidates here? I was making up a toy example and assuming that only trump, desantis, and biden were relevant.
Ok scratch everything I've said under this post, this guy says it all so much better than I ever could!
So you're just gonna imply that I have no proper education and I'm obviously wrong with zero elaboration. That's one hell of a way to argue man.
> you do napkin math with reasonable sounding but made-up numbers to estimate things and the results can be surprisingly accurate or insightful for what basically amounts to a shot in the dark

While the NASA link doesn't address this, the Wikipedia page is pretty explicit that Fermi-style back-of-envelope estimations are a *learning exercise*, meant to surface and refine underlying assumptions and move in the direction of testable hypotheses - this is well and good! But there is absolutely nothing about surprising accuracy or insight, apart from the transparency and testability provided by writing out your work.

Christiano isn't providing any actual back-of-the-envelope information about his calculations, just the final figures. Without those assumptions articulated, these numbers are no more useful than a preacher talking about the End Times, or someone posting their NCAA bracket for friends.

The fact that his numbers are fiddly makes it even more suspect. 22%, 9%, and 11% strongly suggest that either 1) he's got some complicated model somewhere multiplying out a whole heap of other numbers, each of which would also need to have its assumptions spelled out, or 2) he's making up numbers to describe his gut feelings, and choosing fiddly ones because 22%/9%/11% *seems* more precise and smrt than 25% or 10%.

Your point about comparing degrees of uncertainty is well taken, but I'm not really sure how it would apply here. If I try REAL hard, I could imagine that Christiano shared these numbers in order to... reassure folks in the LW community who think the probabilities are higher? Somehow, that doesn't seem likely. Do you see some purpose he could have for sharing them that I might be missing?
> he's making up numbers to describe his gut feelings, and choosing fiddly ones because 22%/9%/11% *seems* more precise and smrt than 25% or 10%.

He says at the beginning of his post that his estimates have less than one significant digit of precision. And yet all of the numbers he provides are specified to two significant digits. Either he's dumb, or he thinks the rest of us are dumb.
> Christiano isn't providing any actual back-of-the-envelope information about his calculations, just the final figures. Without those assumptions articulated, these numbers are no more useful than a preacher talking about the End Times, or someone posting their NCAA bracket for friends.

No, you read that completely correctly. That's precisely what he's doing. It's a casual blog post and the numbers are just a matter of style. That's just how people on that site like to communicate their worldviews, and I think [it offers some advantages](https://slatestarcodex.com/2013/05/02/if-its-worth-doing-its-worth-doing-with-made-up-statistics/).

> but I'm not really sure how that would apply here

For people on LW, AGI ruin is obviously a recurring topic, so they like to numerically quantify how likely they think it is to happen. That lets you compare different people's viewpoints on a(n imprecise) sliding scale. For example, Yud thinks it's north of 95% likely, Geoffrey Hinton has implied he thinks the chance is somewhere around 40%, etc.

> the Wikipedia page is pretty explicit about the fact that Fermi-style back-of-envelope estimations are a *learning exercise*

Well, to be clear, the wiki article is on "Fermi problems," which are indeed learning exercises for teaching *Fermi estimation*, which is not a learning exercise in itself. Fermi estimates are just a general technique for reasoning with really low info, and you can apply that technique in lots of different places.
"Surprisingly accurate" in the context of Fermi estimates means (AFAIK) "usually within an order of magnitude." It's useful for doing a sanity check on an experimental result, and it's only surprising because you normally shouldn't get results that good by just pulling numbers out of your ass. You say that using English without numbers is less precise, but I think that's wrong - using a single number like Christiano conveys an inflated sense of confidence (even after adding an essay around the number talking about how the number shouldn't be trusted as a number). Because then you get problems like having articles written with titles that are scientific in nothing but aesthetic. Subjectively, I would have more readily understood this if he had just said "it's not very unlikely." If an expert gives me a number it's tough to understand it as something that _isn't even measurable._
From gauging everyone's opinions here, I think the objection to him using numbers is a case of culture shock, and I'd like to explain why.

> using a single number like Christiano conveys an inflated sense of confidence

> If an expert gives me a number it's tough to understand it as something that isn't even measurable.

To an outside audience who only ever sees numbers used in rigorous calculations, yes, it absolutely does convey an inflated sense of accuracy or confidence. Most of the comments here are saying just that: that the numbers are decoration, adopting the aesthetics of science without any actual rigor behind them, and that people should only invoke numbers when there's precision. I can sympathize with that viewpoint if you're not used to this type of writing style.

For his intended audience on LessWrong, though, it's kind of a norm to quantify every belief by default (even if there's no precision to these numbers), and in that sort of cultural context, numbers don't really have that same connotation of rigor and precision. Now, you're completely free to think this is dumb and a silly norm. I think it serves a useful purpose, but that's a separate debate. Point is, I think people here may be reading intentional overconfidence behind the numbers where there isn't any, and that's just a case of culture shock.
> For his intended audience on LessWrong though, it's kind of a norm for them to quantify every belief by default (even if there's no precision to these numbers) Ah, classic cult logic: it can't be dumb if *all* of you do it, right? In case you didn't get a chance to look at the sidebar, the purpose of this subreddit is to post critical and derisive things about rationalists and rationalism, specifically because they have a culture of doing things like convincing themselves of absurdities by doing bad math.
I never said it's not dumb because people do it. I'm saying it's not intended to ape the aesthetics of rigor just because it has numbers in it. You're free to think what they're saying is absurd but if it's absurd it's because the argumentation is absurd not because they're using numbers.

I assign a 98.562378% probability that this guy is fantastically full of shit

Sometimes it’s a beautiful day, the sun is shining, and you feel like a billionaire living in a world that is very unlikely to be destroyed by rogue AI.

Señor Joe, the numbers don’t lie, and they spell disaster for you at the Singularity!

this is why EA has always felt so cracked to me, though I’m open to counterarguments. you’re calculating the expected value of things based on probabilities that are “just trust me bro”? then what’s the point of trying to quantify anything if you’re in the end still just making a judgement call?

Sorry, but don’t we usually laugh at these people for assuming their numbers represent actual reality? Yet now that he says “these represent rough estimates of my fluctuating beliefs and should definitely not be taken as objective reality” we are… still laughing at him?

If your belief is changing from 33% to 66% day by day, based on nothing more than vibes, the numbers are useless and you should just say "I don't have enough information to make a meaningful guess", or give the actual range, or anything other than this pseudo objective nonsense.
I find it very entertaining that they *need* to give things a number but routinely [are unwilling and unable to give a formal argument for AI doom](https://www.reddit.com/r/SneerClub/comments/12ofv59/david_chalmers_is_there_a_canonical_source_for/?utm_source=share&utm_medium=ios_app&utm_name=ioscss&utm_content=2&utm_term=1) Like, they’ll make fun of the idea of a qualitative value for basically being too imprecise but can’t formalize their own argument! I simply cannot overstate how asinine it is.
The End Times™ are Right Around the Corner© Don't delay. Buy today!
I predict a 25-33.3% chance that this poster's statement is 50% invalid, and a 67% probability the statement is 50% valid.
Hard to argue with facts like that!
I agree that these would be preferable. I personally do the first one. But I guess he's trying to speak with the LWers in their language, which we aren't.
The language he's using is sneerable, and so he has been sneered at.
I guess that makes sense
The fact that he knows he's using numbers incorrectly doesn't make it better, it makes it *worse*.
Tbh I think it is probably the first step in realising the whole Rationalism stuff is dumb. So hope he gets there eventually.
This dude did an entire PhD in AI doomerism. I don't think this is evidence that he might be waking up, I think it's evidence that he has the best rationalization skills that the higher education system can produce.
Well shit... I had no idea about his background. I thought he was just some random figure on LW; I didn't realize he was a former OpenAI alignment guy.
Oh yeah, he's one of the major figures in AI safety. Leads a rival camp to Yud, I guess. I don't know him personally, but his ideas of doom do read [very differently](https://www.alignmentforum.org/posts/HBxe6wdjxK239zajf/what-failure-looks-like) from Yud's. Although it's been some time since I read that, so maybe I misremember.
AFAIK he dropped out of a quantum computing PhD. His thesis is [here](https://escholarship.org/content/qt0w22c86t/qt0w22c86t.pdf).
He *completed* a computer science PhD, and his dissertation is the document you linked. Note the title and abstract:

> Manipulation-resistant Online Learning
>
> Learning algorithms are now routinely applied to data aggregated from millions of untrusted users, including reviews and feedback that are used to define learning systems' objectives. If some of these users behave manipulatively, traditional learning algorithms offer almost no performance guarantee to the "honest" users of the system

His dissertation is about trying to prevent computers from becoming evil, because his entire motivation for doing the program was finding ways to prevent the robot apocalypse.
This is not true. In the abstract, the only people being described as being evil are the users, not the algorithms.
Lol no. It's about preventing computers from becoming evil at the behest of evil users. If you train a machine learning model to be evil, then it becomes evil. The connections to the robot apocalypse mythology are pretty obvious. I, for one, am pleased that UC Berkeley still has enough standards that they forced him to write about something of realistic technical relevance rather than letting him go full mask-off with the AI doomerism.
Actually, good point, it is true that malicious data can create malicious algorithms. Timnit Gebru's work related to facial recognition software is probably a good example of this. See also the issues related to predictive policing. But this is different from AI doomerism I think.
It's not different from AI doomerism. Paul Christiano believes that there is a significant risk that evil AI will destroy all of humanity, and that's why he did this research. You can't get a PhD by trying to write a dissertation about preventing the arrival of the robot god, because that's absurd, but you *can* get a PhD by trying to find ways to stop "users" from turning computers evil, while expecting that the same research might apply to preventing the robot god from being evil. It won't actually apply to stopping the robot god, of course, because the robot god isn't real, but that's what Christiano is thinking.
Are you referring to his undergrad work with Aaronson? I think Paul settled on his interests pretty early on in his graduate studies. He gave a talk in the logic seminar at Harvard in 2013 (titled "Probabilistic metamathematics and the definability of truth") that you can find on the MIRI youtube channel, for instance.
True, he has a couple publications on quantum computing but you might be right about him not starting the quantum computing PhD.
you might think that, but he's been a cultist for years
Yeah, I had not heard of the guy. I thought he was a random LW user, not counterexample X of 'why do you guys care about LW, they are nobodies and nobody listens to them'.
it's also equally laughable to assign a firm probability to a thought process that comes down to “lol so randum xD!” and to lack self-awareness of your own fallibility…even though it's right there…in writing!
You need to listen to me, there is an incredibly high chance that AI is going to kill us all. No, really, we are on the cusp of an AI holocaust and if we don't start doing drastic things TODAY, like bombing data centers, generations will suffer immensely and the luckiest of our grandchildren will be spared from our paperclip-maximalist hellscape future by being graciously sacrificed at the feet of the monuments they erected to mock our stupidity and inaction at this pivotal moment in history. Also, there's a HUGE chance that I'm also making these predictions up to a pathological degree or something, there's a very small chance that these figures mean nothing at all too lol. Like ugh FOR REAL it's literally going to be the end of the world if the food I ordered doesn't get here in the next ten minutes, I'm literally starving. You're all going to have to start digging your bunkers RIGHT NOW if you want a chance to be deep enough that my ravenous hunger doesn't devour you and everyone on Earth into muh belly xD
Yes.
I get what you’re saying, but I think it’s funny that the object of the sneer can’t simply say “hmm, I’m not sure, but right now I think it’s more/less likely” but instead must clothe guesswork in the appearance of mathematics just to be noticed Like he’s guessing. He’s just fuckin guessing. Which is fine, love to guess, I guess all the time. But if you say “this is a guess” the community gives you faint notice so you have to say stupid shit like “MY PRIORS UPDATED NOW 47.9017% PROBABILITY.” very very funny to have a community founded on *rational thought free of shibboleths* to nevertheless fall into its own shibboleths
Is that not literally exactly what he said, "I don't know but here's some rough numbers about how more or less sure I'm feeling right now"? Getting slightly more specific than just above or below 50% doesn't seem like a huge leap to me, or is being about 75% confident A will happen while not discounting B's possibility not an allowed mental state? That's not rhetorical nor is it a defense of Christiano, I'm just seriously attempting to understand what this community's position is
I’m criticizing the “numbers” portion; I don’t think he’s doing the numbers thing so much from wanting to get across a shade of uncertainty as he is from wanting to portray guesswork as mathematically “sound”
The quotation provided begins by explicitly stating that this is a guess, and that the numbers represent a snapshot of his beliefs at one time, not concrete attempts to predict the future. I get that they're comically precise but isn't that an almost inevitable artifact of assigning probabilities across more than 2 outcomes? On that note, isn't assigning ballpark relative values to uncertain situations with multiple outcomes a pretty effective and common way to talk about them without being vague, confusing and hard to follow?
I agree with what you’ve said, but I’m afraid none of what you said really goes against what I’ve said; I’m saying that this percentages talk is emblematic of a particularly annoying worldview / group of people that I find both irritating and incorrect
Well, you're certainly entitled to that opinion, but I'm pretty sure the rest of the world will keep using percentages to discuss probabilities regardless. Oh, except for the first sentence of your first comment; that's not an opinion, it's just categorically wrong.
okay I’ll try one more time but slowly. the first comment I made waaaaaaaaaay up there was dicking around about guesswork, but contained the general point “why do they clothe everything in math?” you responded defending his probabilistic spread. I clarified that what I was really annoyed about wasn’t the use of percentages, so much as the tendency (again) to rely on signifiers of Logical Thinking Reason for everything to convey the air of intelligence. you responded, once more, missing my criticism, defending using percentages to express uncertainty. I said that I agreed with the idea of using percentages to express uncertainty, but that you hadn’t actually touched the point I was making, which I won’t repeat here because I’ve said it enough. and predictably you responded missing my point for the third time. but uh, you’re entitled to your opinion too? sorry to hear I’m categorically wrong :( I assign a 0% probability to me being correct in the future and a 100% probability to you coming away from this exchange thinking that you “won” or something idk how math works :/
My apologies, I could have phrased that less abrasively; my playful sarcasm translates horribly in text. By saying you're entitled to that opinion I meant to acknowledge that I'd realized it was merely a difference of opinion: you find his way of expressing things pretentious and annoying, which is totally fair. There are plenty of people I feel the same way about for similar reasons; there's no objective right/wrong answer there, either someone annoys you or they don't.

The categorically wrong statement was just meant as a lil parting jab (and it was the second sentence, actually) about your saying he couldn't say "I'm guessing" when he did say exactly that.

Again, sorry if I came across as overly antagonistic. I appreciate the discussion here and value this sub as one of the few dissenting voices I've seen on this topic; y'all are single-handedly preventing me from sliding unwittingly into an echo chamber and I should be nicer to you for that lol
The use of probabilities to communicate vague emotional states about serious topics is bad communication and bad thinking, because it incorrectly implies that the opinion being communicated is based on some kind of sound reasoning and empirical evidence. You can see the consequences of this in [the article from The Independent](https://www.independent.co.uk/tech/chatgpt-openai-ai-apocalypse-warning-b2331369.html), which credulously reports Christiano's probability estimates as if they're real numbers and not made-up nonsense. The concept of "P(DOOM)" (i.e. probability of robot apocalypse) has been explicitly cited, in serious tones, in recent hearings in the US Congress, where legislators are considering regulatory issues. This is despite the fact that *every so-called "P(DOOM)" estimate is entirely made up horseshit,* exactly the same as Christiano's statements here. Worst of all, thinking in these terms makes Christiano (and other rationalists) feel a lot more confident in their beliefs than they should be, because it lets them inappropriately launder their emotions into mathematics. How else could someone with a PhD in computer science convince themselves that their religious beliefs constitute sound science?
It certainly can do all those things, and media treating these numbers seriously is problematic, but I'm not sure how either of those things really applies in this case. The piece explicitly signposts that the numbers are 1) representative of his current intuitions and their relative strengths, not truths about the world, 2) subject to change and revision, and 3) just guesses. I'm sympathetic to the overall point I think you're making about the undue certainty many characters in this field project onto their claims, and that they might deliberately cultivate an attitude of credulity and blind trust in their audience, but I really don't think this is a good example of that.
This is in the not even wrong territory here. Assigning numbers to vague feelings is just purely about the aesthetics of scientism. He might as well puzzle over the proper color of his intuitions if he was among the more artistically-inclined. It’s just words.
Let's say I know that one of three things is going to happen, but I'm unsure which and I don't have a way to rigorously solve it, but what I know so far inclines me to think A is very likely, B quite likely and C unlikely. Provided I signpost it as merely my best current guess, what is wrong with expressing that as 50% confidence in A, 45% in B and 5% in C? And PLEASE express that idea in terms of color, I beg you!

I'm seriously confused about how we're having a debate over using percentages to talk about future possibilities or one's intuitions about them; this is one of the main things we use percentages to talk about, is it not?

Edit: I think we may be talking past each other. To clarify, what I'm defending is merely the validity and usefulness of quantifying personal beliefs about the future in percentage terms, which is what I perceived you as attacking. If your claim is merely that these numbers don't represent actual knowledge (beyond what's in the speaker's head), or that they shouldn't be interpreted as such, or that Christiano is deliberately construing them as such and should stop, then we agree. If you're objecting to the very notion of describing thoughts in this way, then I'm utterly baffled; I don't know how to respond to the rejection of something that basic and common.
man, metacognition just… isn’t your thing, huh
> The piece explicitly signposts that the numbers are

Ain't nobody got time for that. Effective communication consists of saying things that will put accurate ideas into other people's brains, not going on at length about obscure caveats that only make sense to your cultish ingroup. There's no excuse for Christiano's writing here, especially for someone as educated as he is. It's garbage cognition.
If these numbers are rough estimates, then he'd be much better off giving a credible interval: "this is the range in which I think the probability could fall, with probability 95%." That would also help evaluate his claims. If it's something like a range of 0.3 to 0.7, that's a very rough estimate, but if it's a range of, say, 0.1 to 0.9, then it's completely useless.
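A quick sketch of what that interval-style reporting looks like, using the commenter's illustrative endpoints (these are not real estimates of anything):

```python
# Reporting a rough belief as a 95% credible interval instead of a
# single point estimate. Endpoints are the illustrative numbers above.
def describe(low, high):
    width = high - low
    return f"somewhere between {low:.0%} and {high:.0%} (width {width:.0%})"

rough = describe(0.3, 0.7)    # vague but still informative
useless = describe(0.1, 0.9)  # so wide it says almost nothing
print(rough)
print(useless)
```

The width makes the quality of the estimate legible at a glance, which is exactly what a bare "22%" hides.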
I have to agree with you on this one.

My priors are 33% more accurate than the average sneerer’s, but I can empathize with sometimes being wrong, maybe even often so, and how that might make such reasoning feel whimsically haphazard from a simpler perspective, epistemically speaking.

To the layman, Bayesian thinking might seem like it’s “arbitrary”, “dumb” or even “just utter dog shit”, but in the sciences that matter, it can be 12 times more likely to predict future outcomes than the more primitive methods used in softer intellectual disciplines.

Looking at it objectively, there are more Bayesians on the side of important feats like landing on the Moon and pushing Moore’s law to its limits versus spending six decades trying to prove that kids eating marshmallows is racist or whatever the focus of the humanities’ mindshare has been all of this time.

Incredible work, but I think you might need a "\s" here, this is way too real to confidently tell that it's a joke.
Bayesianism is technically inferior to frequentism and strictly inferior to propensitism.
a bayesian, expecting to see a cow catches a glimpse of a donkey and confidently exclaims, "i have seen a mule!"

Here’s the deal. Either

  1. Christiano deserves criticism for being unsure about his beliefs about AI ruin OR

  2. Yudkowsky deserves criticism for demonstrating unwavering certainty about AI ruin

You can’t raise both of those criticisms while staying consistent

People are complaining that his uncertainty is in fact much higher than he's presenting it as, not that he is unsure in the first place. If you state highly varying probabilities day-to-day then you're just engaging in a weird social game, not attempting to quantify real beliefs. Not every twitch of your mind is an epistemic evaluation, and it is weird to half-acknowledge this and then double down on the exercise anyway.
[deleted]
> Christiano should stop putting numbers on things he can't properly quantify. I responded to someone else in this thread about this but basically [low precision estimates are not as silly as you think they are](https://www.reddit.com/r/SneerClub/comments/13peat6/paul_christiano_calculates_the_probability_of_the/jldm6kd/?context=3)
Lol Christiano's beliefs are absurd irrespective of how uncertain he is about them. Estimating that there's a 50% chance that unicorns exist doesn't make you less of a kook than the guy who says it's a 100% chance.

And are you seriously defending the guy who wrote this?

> I’m giving percentages but you should treat these numbers as having 0.5 significant figures.
>
> Probability of an AI takeover: 22%

The number 0.22 has *two significant figures of precision*, not "0.5". Did he find his PhD in a Cracker Jack box or something?
BUT. He's saying that they're written as exact percentages BUT aren't meant to be taken as certainly as that might imply. This isn't the gotcha you think it is.
No, that's not how this works. When people who know how numbers work write things like this, if they have one significant digit of precision then they'll either write something like

> 20%

or, even better, something like

> 20% +/- 5%

Writing

> My estimate is 22% (but ignore that second two on the end lol)

is incompetent or dishonest. It suggests that he either doesn't know what precision even means, or he wants to trick us into being more convinced by his numbers than he himself is.
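For concreteness, honoring roughly one significant figure is a one-line operation. The 22% is the figure quoted upthread; the rounding is just an illustration of what a precision-honest report would look like:

```python
# Rounding a point estimate to the single significant figure
# the author himself says it deserves.
estimate = 0.22              # reported as "22%"
honest = round(estimate, 1)  # one significant figure: 0.2
print(f"{honest:.0%}")       # prints "20%"
```

Reporting "20%" (or "20% +/- 5%") carries exactly the precision being claimed; "22%" silently claims a digit more.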