r/SneerClub archives

[deleted]

Thiel compared himself to Romulus and Remus? Hooooooly shit. I guess I shouldn't be surprised.

Not sure if this is a repost; it's the New Yorker's profile of Sam Altman, mostly focusing on his general weirdness, YC, OpenAI, and his relationship with PayPal Mafia Sugar Daddy Peter Thiel.

Some choice lines:

“there’s absolutely no reason to believe that in about thirteen years we won’t have hardware capable of replicating my brain”

that’s right. absolutely none.

“we learn only two bits a second”

This man has clearly not been learning two bits a second for quite some time.

"If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever."

I always feel like people who say things like this have never programmed a computer. Most software barely works and you expect it to magically become a god/world-spirit/daddy replacement overnight?

"We don't plan to release all of our source code," Altman said. "But let's please not try to correct that. That usually only makes it worse."

This just makes actual researchers’ lives harder trying to reproduce the work you’re supposedly doing. This isn’t good science and it’s not helping anyone but YC/Musk/etc’s publicity machine. This sort of attitude towards computer science legitimately pisses me off. Our field doesn’t need more hurdles to experimental reproducibility, and it doesn’t need more proprietary software.

“Our goal right now . . . is to do the best thing there is to do.”

To work on our cult/vanity project that doesn’t actually produce any real AI research

"that's the unsettling thing about neural networks—you have no idea what they're doing, and they can't tell you."

because gradient descent is magic.

"We need to level up humans, because our descendants will either conquer the galaxy or extinguish consciousness in the universe forever."

“we help the bad founders look indistinguishable from the good ones.”

for a bunch of guys who have really optimistic ideas about human evolution, they sure don’t understand how natural selection is supposed to work

Alumni view themselves as a kind of keiretsu

mandatory fetishization of east asian business culture, because conglomerate is a dirty word.

>I always feel like people who say things like this have never programmed a computer. Most software barely works and you expect it to magically become a god/world-spirit/daddy replacement overnight?

Who said "overnight"? I don't see that word used in the article. But regardless, that's not an argument. Computer software often has bugs. The first AIs may also have bugs. It doesn't mean they can't be dangerous. It doesn't mean the bugs can't be worked out. Betting the existence of the world on software bugs is just insane.

>This just makes actual researchers' lives harder trying to reproduce the work you're supposedly doing. This isn't good science and it's not helping anyone but YC/Musk/etc's publicity machine.

Um, what? Many, maybe even most, AI researchers don't release source code. OpenAI does release most of their code. He just said they won't release all of it if they actually get close to building an AI. Yes, that may make other researchers' lives harder. *That's the point.*

>To work on our cult/vanity project that doesn't actually produce any real AI research

OpenAI produces real AI research.

>because gradient descent is magic.

And again, the word "magic" does not appear in the article at all. No one thinks gradient descent is magic, just that the results it produces are very complex and hard to understand.
>Who said "overnight"? I don't see that word used in the article.

"Overnight" in the scheme of technological progress. Research is slow, and especially in experimentally dense fields it can take years to make decent progress on a seemingly simple research question. The development of human-level AI is not a simple research question, and historical attempts at estimating how long it will take have been wrong by several orders of magnitude (there are many more recent and varied examples, but remember the Dartmouth Conference?). There is little reason to believe that our ability to estimate the amount of work required has gotten better. Claims that meaningful progress can be made in just a few years on such a complicated and poorly specified problem should be treated with great suspicion, and should be conditioned on the prior that we have historically had a very poor understanding of what it will take, and how long, to make progress in this field.

>But regardless that's not an argument. Computer software often has bugs. The first AIs may also have bugs. It doesn't mean they can't be dangerous. It doesn't mean the bugs can't be worked out.

Bugs or no bugs, the point I was making was about the time scale at which AI research operates and the current state of technology. On top of the already great difficulty we've historically had estimating the progress of AI research, modern software engineering as a trade is notorious for three things: high project failure rates, underestimated project costs, and underestimated project timelines. I am not saying that it's unnecessary to worry about the future of automation technology or AI, but that the time scales espoused here, and the projected successes, are not founded on anything but wishful thinking in light of the current state of the art.

>Betting the existence of the world on software bugs is just insane.

I think we have very different positions on AI risk. I also think that much of the discussion centered on the existential threat allegedly posed by AI detracts from many of the more immediately real problems caused by growing automation in industry and bias in current ML systems, and gives the public at large a very skewed idea of what is possible in the near future. I believe this has a negative impact on the field as a whole.

>Um, what? Many, maybe even most, AI researchers don't release source code. OpenAI does release most of their code. He just said they won't release all of it if they actually get close to building an AI. Yes, that may make other researchers' lives harder. That's the point.

My point is that there is, in general, a reproducibility problem with a lot of modern science (CS journals especially can wind up being mostly write-only because of an early reticence to release experiment code and data). A public figure claiming that there are altruistic, safety-related reasons for this sort of behavior is, in my view, a bad thing, because it normalizes the practice, again setting back the field as a whole.

>OpenAI produces real AI research.

I was thinking more about their support of MIRI. If you want to talk about OpenAI, I would contend that the goal of furthering AI research would be better served by providing grant funding to established research groups.

>No one thinks gradient descent is magic, just that the results it produces are very complex and hard to understand.

I was obviously being hyperbolic, but even so, he explicitly states with regard to neural networks that "you have _no_ idea what they're doing, and they can't tell you" (emphasis mine). I don't disagree that it can sometimes be difficult to interpret the behavior of deep networks, but for a respected public figure to write off all of the computational learning theory and experimental work that has gone into explaining modern advances in machine learning is incredibly dismissive, and it gives the public at large a wrongheaded idea of what the current state of the art looks like.
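To put the point concretely: gradient descent itself is nothing but the chain rule plus a subtraction, and even for trained networks there are crude but real probes, like input gradients, of what the model is responding to. A toy sketch in numpy, every name and number in it made up purely for illustration and not anyone's actual code:

```python
import numpy as np

# Toy two-layer net: y = w2 . tanh(W1 @ x). Sizes are arbitrary and
# purely illustrative.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # hidden-layer weights
w2 = rng.normal(size=4)        # output weights

def forward(x):
    h = np.tanh(W1 @ x)        # hidden activations
    return w2 @ h, h

def sgd_step(x, target, lr=0.1):
    """One gradient-descent step on squared error: just the chain rule."""
    global W1, w2
    y, h = forward(x)
    err = y - target                            # dL/dy for L = 0.5*(y - t)^2
    grad_w2 = err * h                           # dL/dw2
    grad_h = err * w2                           # dL/dh
    grad_W1 = np.outer(grad_h * (1 - h**2), x)  # dL/dW1 through tanh'
    w2 = w2 - lr * grad_w2
    W1 = W1 - lr * grad_W1

def input_saliency(x):
    """Gradient of the output w.r.t. the input: a crude 'what is the
    network sensitive to' probe of the kind interpretability work builds on."""
    _, h = forward(x)
    return (w2 * (1 - h**2)) @ W1               # dy/dx

x = np.array([1.0, -0.5, 2.0])
sgd_step(x, target=1.0)
print(input_saliency(x))
```

The update rule is completely transparent; what's genuinely hard is interpreting the millions of weights it leaves behind, and that is exactly the problem the interpretability and learning-theory literature is working on.
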
>OpenAI produces real AI research.

Yeah, one of their researchers commented about this article on Facebook recently. He said that their philosophy was pretty broad, like the article mentioned, but that they have a more robust set of tasks and things they're working on.
He sold his own software startup for 8 figures. He knows how to program a computer.
I didn't say he didn't, and I don't doubt that he does; it's just that his attitude toward AI programming is really naive and sensationalist, and you'd expect someone with any amount of experience working with computers to know better.
Talking about AI several decades in the future is so different from programming or even machine learning today. It's like saying someone with a background in molecular biology should know about cognitive neuroscience.
My contention is that a lot of these guys _don't_ think it's several decades in the future; they think it's very near-future (for example, Altman in the article posted claims that hardware capable of replicating his brain is less than fifteen years out). You can either believe that superintelligent AI is a long way off, in which case, yes, you can absolutely claim that a lot of the problems with modern AI systems and software engineering won't be issues, or you can believe that it's near-term and therefore achievable with technology similar to what we use today. He does not do the former. He does the latter. I do not understand how anyone with any amount of practical experience in computing, or a good understanding of the current state of the art, could hold these opinions without acknowledging how ridiculously optimistic they are. I think a lot of this concern and hype is wrongheaded and misplaced, and that it is detrimental both to the field of AI and to society's response to it.