I liked Nostalgebraist's review of the “bio anchors” report that this post springs from. The whole thing seems like an exercise in building chains of vague guesses on top of each other, and assuming that the more math you do with unknown inputs, the more reliable the final result is.
Initially, the AI is designed to replicate human behaviors and expressions, because the goal of the AI is to fit seamlessly into a social ecosystem dominated by human beings while outperforming the humans it's engaging with. This is typically with an eye towards monetizing the labor of the AIs.
But as we iterate the experiment and the number of AIs grows larger, the design of the AI focuses on being mistaken for humans as a means of evading spam-control and security restrictions.
Eventually, the AIs will become so all-encompassing that *humans* will be assumed to be the deviant actors. Humans will need to behave like AIs in order to navigate the digital environment, because the baseline assumption will be that it's just AIs talking to AIs all the way down.
The end state of all this AI engineering is to (intentionally or not) create a digital environment that is exclusive to AIs, as the AIs are presumed better at doing human tasks than humans. And therefore, any successful monetization scheme must be entirely centered on AI interactions.
Linear scaling holds if you just concede the assumption that AIs are *supposed* to be in charge of everything and you don't really care what they're outputting. "Successful AI" just becomes some set of bots spamming each other with monetized interactions in such a way that no human observer can intercede.
A 200-page report, yeesh. I'm convinced that being long-winded about vaguely technical things is a rhetorical position some people stake out to defeat their lower-energy opponents (I'm happily in the latter category).
>Your model is going to make an argument – somewhere inside, implicitly – whether or not you know what it is. And if you don’t know what it is, you don’t know whether it’s any good.
I think this is the fundamental problem of Yud. You can itemize the assumptions in here and they're his common ones. E.g., "whatever technique is most recent just needs to be linearly scaled in order to get general intelligence," "Moore's law will hold up for as long as I need it to for this to happen," "the politico-economic environment will always encourage this."
You can actually imagine him writing this paper in 1810, and being like "Gauss's new 'least-squares' technique will provide superhuman intelligence if we can train the lower orders to do matrix multiplication, because obviously the number of serfs will increase exponentially for the next 100 years."
>My personal probabilities are still very much in flux and not robust. Not robust means that further arguments and evidence could easily change my probabilities significantly, and that it’s likely that in hindsight I’ll think these numbers were unreasonable given the current knowledge available to me.
I wish these people were willing to just say "I'm pulling shit outta my ass". The inability to speak plainly is almost more annoying than the things they are actually saying.