r/SneerClub archives
Just came across this gem: "So You Want To Be A Seed AI Programmer" (http://nostalgebraist.tumblr.com/post/99912030319/the-effectiveness-of-miri-debate-seems-to-have)

Oh god. Yudkowsky's AI ideas, especially his older stuff, are hilariously awful. That's not even the best document; that's just a recruiting post. Somewhere out there is a very long document that lays out something close to the technical details of exactly how his AI was supposed to work. It was called "Levels of Organization in General Intelligence" (LOGI). It changed a number of times over the years until he removed it entirely.

As I understand it, SIAI actually did attempt to build "seed AI" with his dream team, and then hit a wall when they realized exactly how ridiculously hard AI is, and that his ideas were infeasible, or at least nowhere near sufficient. Yudkowsky is firmly in the so-called "Neat" camp of AI: the belief that there exists a perfect mathematical algorithm for intelligence that we can discover, and that everything will be mathematically elegant and provable. Most AI work today is "Scruffy", done by people who don't care about mathematical elegance or a complete understanding of why everything works: heuristic-based, iterative, and messy.

MIRI's work today continues that tradition. It's good that they are now actually producing research, but it is very mathematical, with little connection or relevance to existing AI methods. It's all very impractical to actually implement, requiring computers the size of the universe or larger. I don't think it's entirely worthless, but personally I think it's the wrong approach. And in any case, it's a long way from building an FAI.

> Somewhere out there is a very long document that lays out something close to the technical details of exactly how his AI was supposed to work.

https://web.archive.org/web/20010202042100/http://singinst.org/CaTAI.html (Coding a Transhuman AI)

https://web.archive.org/web/20010213215810/http://sysopmind.com/sing/plan.html (original plan for the Singularity Institute)

https://web.archive.org/web/20010606183250/http://sysopmind.com/singularity.html

https://web.archive.org/web/20010309014808/http://sysopmind.com/eliezer.html
> MIRI's work today continues that tradition. It's good that they are now actually producing research, but it is very mathematical, with little connection or relevance to existing AI methods. It's all very impractical to actually implement, requiring computers the size of the universe or larger. I don't think it's entirely worthless, but personally I think it's the wrong approach. And in any case, it's a long way from building an FAI.

I'm not a huge fan of MIRI's approach either, and the technical reviewers for their most recent grant expressed similar concerns, but it's a mistake to dismiss them as if they simply haven't kept track of the last few decades of progress in ML. They do have reasons for doing what they do; their technical agenda spells it out a little better.

The important thing to note with them is that they don't think it's likely for general artificial intelligence to be derived straight out of current types of systems (who does, honestly?). They're trying to poke at underlying principles of intelligence which could be applicable to a wide range of artificial systems. Future paradigms of machine learning could be very different from today's, so in the absence of that knowledge you start with bare theorems (a sketch of one such theorem follows below). Any applied, hands-on work which directly relates to real systems is much easier to do if you wait for those systems to come closer to fruition, so in the meantime this is seen as a more productive avenue.

They're not too concerned with computational expense because (a) they expect both algorithms and computing power to improve significantly between now and whenever general intelligence becomes a thing, (b) they're really laying groundwork as described above, and (c) they want their work to remain theoretical for the time being because they don't want to accelerate generic progress in the field. Mostly (a), I believe.

As for actually building FAI: I'm not saying they *don't* intend to, but the only reason I ever realized this was a real plan is this kind of pre-MIRI Yudkowsky stuff. Neither the papers I've read nor the comments I've seen from them would have hinted to me that they actually intended to ever build such an entity themselves.
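For a rough sense of what "bare theorems" means here: the commenter doesn't name one, but a standard example from MIRI's technical agenda on self-reference is Löb's theorem and the resulting "Löbian obstacle" to proof-based self-trust. A minimal sketch (my illustration, not anything from this thread):

```latex
% Löb's theorem: for any theory T extending Peano Arithmetic,
% with \Box_T P read as "T proves P", and any sentence P:
%
%   if T proves (\Box_T P -> P), then T proves P.
%
% The "Löbian obstacle" for self-modifying agents: an agent
% reasoning in T cannot blanket-trust a successor that proves
% things in T. If it accepted "whenever my successor proves P,
% P holds" for every P, Löb's theorem would force T to prove
% every sentence P outright; naive proof-based self-trust
% therefore collapses into inconsistency.
\[
  T \vdash (\Box_T P \rightarrow P) \;\Longrightarrow\; T \vdash P
\]
```

This is the flavor of result they work with: theorems about provability and self-reference that would constrain any sufficiently reflective system, regardless of which machine learning paradigm it is built on.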
> I was involved in GamerGate from the very beginning under another name. While I agree with many of the ideals of the movement, I disagree with the approach taken. I wanted the group to become a legitimate organized political group. If they had, then by now they would be an active part of the national/global debate. Instead, despite accomplishing some very important things, the general impression of GG remains completely negative, which I find rather sad.

Your post history is taking me to such magical spots on reddit, full of such wonderful people.
> ACTIVE PART OF THE NATIONAL/GLOBAL DEBATE

Ah yes, a bunch of nerdboys whining about women in video games, what an important addition to mainstream discourse.
What? Where are you getting that from? I never comment on GG.
From the /r/WikiInAction thread you commented on.
I never even knew there was a place on Reddit to bitterly complain about how Wikipedia and RationalWiki are filled with bluepilled SJeW cultural marxists, but it turns out there is.
Back when I was more active on Wikimedia sites, I occasionally lurked the Wikipediocracy forum (disclaimer: I never had an account there), which is where all the butthurt banned Wikipedia editors and trolls hang out. One of the incidents I remember was when WMF executive director Lila Tretikov's boyfriend created an account there and started participating. /r/WikiInAction seems to be the GamerGate version of Wikipediocracy.
Ah, okay. I was never on that sub before yesterday. Yeah, I didn't really realize that they were all there to complain about basically the same thing. I Am Not A GamerGater; we can get that disclaimer out of the way.
There were one or two subtle clues that might have tipped you off
You underestimate how out of touch I am.

Ah, nostalgebraist. I like that guy. He writes some damn good fiction too: *Floornight* and *The Northern Caves* are excellent works of original nerd fiction.

This is the early Eliezer Yudkowsky style I first read over a decade ago, and the kind I like best: unabashedly audacious, full of fire and vigor, attacking the problem of AGI with drawn steel. I keep it yet among my pool of intellectually inspirational documents.

> drawn steel

A cheap katana he bought off eBay for ten times its actual price. I prefer the later, more obviously Skynet-fearing Yud. I just find it funnier, although this piece is impressively stupid in its way.