Paul Christiano’s recent LessWrong post on the probability of the robot apocalypse:
I’ll give my beliefs in terms of probabilities, but these really are just best guesses — the point of numbers is to quantify and communicate what I believe, not to claim I have some kind of calibrated model that spits out these numbers […] I give different numbers on different days. Sometimes that’s because I’ve considered new evidence, but normally it’s just because these numbers are just an imprecise quantification of my belief that changes from day to day. One day I might say 50%, the next I might say 66%, the next I might say 33%.
Donald Trump on his method for calculating his net worth:
Trump: My net worth fluctuates, and it goes up and down with the markets and with attitudes and with feelings, even my own feelings, but I try.
Ceresney: Let me just understand that a little. You said your net worth goes up and down based upon your own feelings?
Trump: Yes, even my own feelings, as to where the world is, where the world is going, and that can change rapidly from day to day…
Ceresney: When you publicly state a net worth number, what do you base that number on?
Trump: I would say it’s my general attitude at the time that the question may be asked. And as I say, it varies.
The Independent diligently reported the results of Christiano’s calculations in a recent article. Someone posted that article to r/MachineLearning, but for some reason the ML nerds were not impressed by the rigor of Christiano’s calculations.
Personally I think this offers fascinating insights into the statistics curriculum at the UC Berkeley computer science department, where Christiano did his PhD.
My source is that I made it the fuck up
Shoutout to this guy on r/MachineLearning:
I assign a 98.562378% probability that this guy is fantastically full of shit
Sometimes it’s a beautiful day, the sun is shining, and you feel like a billionaire living in a world that is very unlikely to be destroyed by rogue AI.
Señor Joe, the numbers don’t lie, and they spell disaster for you at the Singularity!
this is why EA has always felt so cracked to me, though I’m open to counterarguments. you’re calculating the expected value of things based on probabilities that are “just trust me bro”? then what’s the point of trying to quantify anything if you’re in the end still just making a judgement call?
Sorry, but don’t we usually laugh at these people for assuming their numbers represent actual reality? Yet now that he says “these represent rough estimates of my fluctuating beliefs and should definitely not be taken as objective reality” we are… still laughing at him?
My priors are 33% more accurate than the average sneerer’s, but I can empathize with sometimes being wrong, maybe even often so, and how that might make such reasoning feel whimsically haphazard from a simpler perspective, epistemically speaking.
To the layman, Bayesian thinking might seem like it’s “arbitrary”, “dumb” or even “just utter dog shit”, but in the sciences that matter, it can be 12 times more likely to predict future outcomes than the more primitive methods used in softer intellectual disciplines.
Looking at it objectively, there are more Bayesians on the side of important feats like landing on the Moon and pushing Moore’s law to its limits versus spending six decades trying to prove that kids eating marshmallows is racist or whatever the focus of the humanities’ mindshare has been all of this time.
Here’s the deal. Either
Christiano deserves criticism for being unsure about his beliefs about AI ruin OR
Yudkowsky deserves criticism for demonstrating unwavering certainty about AI ruin
You can’t raise both of those criticisms while staying consistent.