r/SneerClub archives
Is there any serious AI researcher making these kinds of claims? [taken from MIRI's FAQ](https://i.redd.it/kcm43o1hygg81.jpg)

Probably. Being an expert in AI doesn’t make you an expert in health, public policy, or any other speculative futuristic field, and researchers have an incentive to exaggerate the societal impact of their work to get more funding and public support.

[yep](https://twitter.com/fchollet/status/1389337090278658052)
The tweet is a bit overwrought, but I do believe that ML tech will become standard in nearly every science, at least to a degree. It's a tool. It isn't a perfect tool, but it's pretty darn good. That said, saying every field will *literally be* compsci goes too far. I still find Yudkowsky's statement far more ridiculous. I can give a generous reading to that tweet; Yudkowsky's statement is fantasy la-la land nonsense.
Adoption is also going to be really slow. In political science, people *insist* on frequentist regression for problems where it just doesn't apply -- there are a lot of math-phobes here who still feel the need to use math to make their work look relevant to a sometimes irrationally quantitative paradigm, and they don't want to use anything they weren't forced to pick up in grad school. I really wish more people would either admit they're theory people, or use Bayesian stats and SVMs when those are clearly better for a particular job (a toy sketch of both below). But idk if or when that's gonna happen. I fundamentally agree that incorporating these tools into our basic repertoire is the future of science, but that won't make everything comp-sci, nor does it mean ML methods will always beat some good old stats. Sometimes they will, though, and they should start getting incorporated into basic quantitative toolkits.
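
For concreteness, a toy sketch of the two alternatives named above, assuming PyMC/ArviZ and scikit-learn; the data is synthetic and the priors arbitrary, purely to illustrate the tools rather than any real research design:

```python
# A toy sketch, assuming PyMC/ArviZ and scikit-learn; data is synthetic
# and purely illustrative.
import numpy as np
import pymc as pm
import arviz as az
from sklearn.svm import SVC

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 1.5 * x + rng.normal(scale=0.5, size=200)  # continuous outcome
labels = (y > 0).astype(int)                   # binary outcome

# Bayesian regression: a full posterior over the effect size,
# instead of a point estimate plus a p-value.
with pm.Model():
    alpha = pm.Normal("alpha", 0, 10)
    beta = pm.Normal("beta", 0, 10)
    sigma = pm.HalfNormal("sigma", 5)
    pm.Normal("obs", mu=alpha + beta * x, sigma=sigma, observed=y)
    trace = pm.sample(1000, progressbar=False)
print(az.summary(trace, var_names=["beta"]))

# SVM: a flexible classifier for when a linear model is the wrong tool.
clf = SVC(kernel="rbf").fit(x.reshape(-1, 1), labels)
print(clf.score(x.reshape(-1, 1), labels))
```

The point of the sketch is the posterior summary: you get a distribution over the effect size rather than a single significance star, which is exactly the trade the comment is advocating.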
ornithologists will just simulate birds instead of catching and ringing them
> Being an expert in AI doesn't make you an expert in health, public policy, or any other speculative futuristic field

But it does make a lot of people *think* they are. Being at the center of a fad field can inflate self-perception of general expertise.

Goofy, boastful statements happen in the field; Hinton himself once claimed that neural networks would solve every open philosophical and psychological problem within ten years. Suffice it to say that this was said well over ten years ago, and while artificial neural networks are super cool and useful, they have thus far not threatened the jobs of either philosophers or psychologists. Anyhow, yeah, a pretty silly thing to say, and Geoff Hinton is a) a legitimate, credible researcher in AI, b) a good, decent person. Less knowledgeable people with perhaps less decent instincts are going to say all sorts of stupidities.

Marvin Minsky thought we'd actually have a HAL 9000 by the year 2001, and said pretty similar things to this.
Pretty much all the big name AI researchers probably went into the field at least *wanting* to build a human-level general AI, even the ones like Rodney Brooks (more robotics, but still) who are skeptical that it will happen soon. Deep learning in particular is a hot bustling field now, but there were decades where it was seen as a dead end, decades that overlap with the early careers of those current big names. You probably wouldn't have gone into it back then unless you had a little romance in your soul and a bigger vision than "one day we'll use this to make really good online shopping recommendations."
AGI has always been predicted to arrive within the next 20 years. Some of the big-name people, Turing among them, also predicted AGI was just around the corner.

It really is just The Rapture for nerds.

Ayup. And not the awesome novel The Rapture of the Nerds by Cory Doctorow and Charles Stross, either. Just a really shitty bargain bin ripoff rapture, with all the Christian terminology filed off and replaced with tech industry shibboleths.

Does Ray Kurzweil count?

zero out of three
The man is serious about his supplements
I just looked this up... does he actually take 100 supplements a day?
A friend used to work at Kurzweil's office in the Boston metro area. There was apparently a tray in his office _covered_ with pill cups, arranged by time of day.
"Not that we needed all that for the heavy duty thinkernoutening, but once you get locked into a serious nootropics collection, the tendency is to push it as far as you can. The only thing that really worried me was the ether."

Sure, “could”, but that ignores the geopolitical facts of our situation. Global players WILL leverage AI to promote their own interests at the expense of their perceived competition. AI then just becomes another lever of power for the few who have access to the machines of governance. Technologies advance; human psychology, not so much.

'You could solve world hunger and cure all diseases.'

['But I don't want to cure world hunger, I want to create better advertisements.'](https://i.kym-cdn.com/photos/images/original/001/125/992/944.jpg)

I don’t see any extraordinary claims. It’s entirely possible supercomputers/superintelligences could be used, within a specified timeframe, to help solve these problems.

“Could” is not a particularly strong claim. It would be different if the claim was that these problems will be solved, especially within a specific timeframe. (For that matter, the opposite claim that these won’t be improved in any meaningful timeframe is a fairly strong claim as well).

I think the issue is that they are claiming the superintelligence could solve all the world's problems by itself, not that it could be helpful, which is a pretty uncontroversial idea. The fact that they believe a superintelligence alone is enough to solve these issues is the central problem for me. Also, I'm not a sociologist or a political philosopher, but I think we already more or less know the causes of a lot of the world's problems; those causes are just way too complex to be solved just like that.
So, I think there are multiple ways to read this. One is the sort of "*foom*" superintelligence, where rationalists theorize there is some "tipping point" of meta-intelligence: an AI knows just enough to teach itself to learn better, which it applies to itself in a loop. All someone has to do is let this job run on a supercomputer over the weekend, and then, armed with near-limitless computational resources and no need for sleep, it grows from babylike intelligence to a full-grown human adult by Monday. By the end of the week, it's become Skynet. That's the more fanciful sci-fi version of how the AI singularity is supposed to happen.

The more "steelman" version is that over a series of years, machine learning models are used more and more to find unusual insights into increasingly large subproblems, and that over the span of decades they become more and more autonomous in solving those problems. I feel this is more or less bound to happen.

The part where AIs start solving geopolitical or philosophical problems rather than just scientific or technological ones, though, is where I get off the bus. Those are the problems where it's *far from clear* that what's holding us back is a lack of computational power to chew through large data sets.
TBF, I personally could solve all the world's non-scientific problems in a few weeks, and I'm barely smarter than average. IOW, it's not hard to come up with organizational solutions. Implementing them is the impossible part, unfortunately.
These problems have already been solved. The solutions are inconvenient to capitalism, so they are not implemented.
World peace is a bit more complicated, and it isn't just capitalism holding us back from that. Which isn't to say that capitalism isn't holding us back, btw, just that it isn't quite as simple as that. I look forward to the various modules of the distributed AGI fighting each other for resources and not realizing they are part of the same system, due to coding errors, network errors, lag, latency, and malicious and well-meaning hackers. We can't even build a system which accurately shows you how many views a YouTube video has had; AGI will be wild. (I mean, in a way, isn't [this a form of digital schizophrenia](https://www.theverge.com/2017/4/12/15271874/ai-adversarial-images-fooling-attacks-artificial-intelligence)?)
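
For anyone who didn't click through: the linked piece is about adversarial examples, where tiny pixel nudges flip a classifier's output. A minimal sketch of the classic fast-gradient-sign version, assuming PyTorch/torchvision and a pretrained model; the function name and epsilon value here are illustrative, not taken from the article:

```python
# A minimal FGSM sketch, assuming PyTorch and torchvision; epsilon and the
# function name are illustrative.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image, label, epsilon=0.01):
    """Nudge each pixel in the direction that most increases the loss."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The perturbation is invisible to a human but can flip the model's answer.
    return (image + epsilon * image.grad.sign()).detach().clamp(0, 1)

# Usage: adv = fgsm_attack(img_batch, torch.tensor([class_idx]))
# where img_batch is a (1, 3, 224, 224) float tensor in [0, 1].
```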
Billionaire-funded AI scientists: Tell us, O Great AI, how we might achieve lasting world peace.

AI: Place all economic production under the democratic control of the working class as such. This will eliminate all incentive for wars over profitable resources.

Billionaire-funded AI scientists: Wait a second,
Uh-huh.

We know the answer to most problems. The problem is that those who can solve them do not want to.

None that I know. For the most part they:

a) think human in the loop mechanisms are important for most practical AI and would be incredibly wary of anything this automated.

b) think the tech (Deep Learning is the current buzzword, though according to the ML peeps I know the media just kinda cycles through terms if you're around long enough) has been criminally overpromised, and we're nowhere near the capacity for something like this to just happen, intentionally or unintentionally.

Idk if I know a representative sample of AI people, but the people I work with and respect would tear this to shreds.

All well & good, but I can't imagine the US approving its use if China were the country that invented it.

On top of that the US economy is reliant on peace NOT breaking out. So zero chance of letting it happen.

All of these problems have been solved. Some repeatedly. The solutions are inconvenient to capitalism so they are not implemented.

I’m confused, these all seem like trivial claims. A superintelligence is defined as being smarter than a human, so of course it would outperform humans.

That's not trivial at all; it comes from the thinking that IQ = superpowers. There is no guarantee that an AI that is super-powerful at mathematical discovery will also be super-powerful at geopolitics. Many of the problems listed require the cooperation of lots of different entities with competing interests; the idea that one single very smart AI can solve them is very fanciful.
But it's not saying that every AI can solve every one of these problems. Of course intelligence is domain-specific, but cross-domain intelligence is part of what makes us consider something AGI. And a floor on the theoretical upper limit is "at least as smart as the smartest human in every domain", unless you are claiming that human intelligence comes from non-physical processes and cannot be simulated.
The smartest human in geopolitics cannot solve world peace. Solving world hunger, meanwhile, does not require a genius at all, just a more just distribution of resources. These are political problems, and they require political solutions. The idea that they just need more "ingenuity or processing speed" is ridiculous.
But creating a more just distribution of resources requires convincing world leaders to agree to it, which is something that intelligence does help with. Or simply acquiring resources to be distributed. I feel like you are rejecting anything that doesn't show up on an IQ test as intelligence, then saying that intelligence isn't important because acing an IQ test doesn't matter.
This is just not how politics works. Do you think that if we airdropped a resurrected Einstein into Afghanistan, he would be able to persuade the Taliban to respect women's rights?
Now I'm imagining somebody going 'I bet if the Taliban had a female Einstein to make them a nuclear bomb, they would respect women's rights.' Brb, gonna have to be mad about this person I just imagined on Twitter.
No? Again, there is a difference between being good at STEM-type things and the broader concept of intelligence. To my knowledge, Einstein was not some amazing negotiator, nor would he likely have the resources necessary to accomplish the task (though with enough time and intelligence he could acquire them). Do you reject that, in the fictional scenario where you had magical omniscience and knew just the words to say and who to say them to, you could get the Taliban to respect women's rights just by talking? It seems obvious to me that the answer is yes, it is possible, and the only reason we can't do that is that we aren't smart enough. Would be interested to know if you think otherwise.
The key words in that last paragraph are “fictional” and “magical”. These beg the question. Sure if you assume the conclusion you desire, then the answer is always obvious.
So you agree that there is some set of words that could be said to some set of people that would convince the Taliban to respect women? If you do, then it's a matter of finding one such set of words and people, which is a task that having superhuman intelligence would help with.
> So you agree that there is some set of words that could be said to some set of people that would convince the Taliban to respect women?

No, I don't. I'm sure there are words that would convince some individual Taliban members to respect women, but not the leadership, who hold fundamentalist beliefs as an axiom of their identity. And how exactly would you know which members are susceptible to particular messaging? You often only get one shot. Remember, superintelligence and omniscience are not the same thing. The other thing missing here is an analysis of power. If you take your shot in the wrong way, the Taliban can just kill you, and it's game over. They can kick you out of the country and refuse to listen to anything you say. Intelligence without power or influence does nothing.
To be clear, I'm not imagining just saying words to individual Taliban members, the set of words spoken to a set of people might include words to the president about how to apply political pressure to achieve the result, and convincing him to do it, words to activists to drum up sufficient support, and so on. I also should clarify that I do not imagine it is possible to change everything overnight, but rather on a timescale of years. If you still disagree given those clarifications it seems that we disagree about what is even theoretically possible to accomplish socially, let alone whether a sufficiently intelligent entity could actually go about achieving those accomplishments. Perhaps if we scaled it back to simply improving the quality of life under Taliban rule, you would agree that the extent to which you could change things is greater when you have increased intelligence? Regarding power, I agree that it is necessary to achieve many goals. However, you can't ignore the fact that intelligence can be helpful for obtaining power, or that those with power would be able to use intelligence to achieve goals more effectively.
I've never stated that intelligence isn't helpful towards goals. I'm arguing against the notion that individual entities can solve global geopolitical issues on their lonesome just by thinking super hard. Individual actions can assist in pushing the tide of history one way or the other, but there are 6 billion people on this planet. If they decide to push one way, the smartest individual ever to exist cannot help but be swept along with the tide.
I think I understand your argument a bit better now. I'm willing to concede that you could reasonably be correct for many problems if we limit ourselves to just a human with the ability to think real good, without expanding a human's capacity for sensory input or output into the world. Realistically (ha), an AI would have access to the entire internet at least (both for input and output), plus probably some drone footage or similar. It could simultaneously be working on all 6 billion people at once, matching their output. This is in some sense not really being more intelligent, just having more throughput.

(Perhaps it's not very interesting to go back to the beginning of the conversation here, but the claim that having intelligence makes you *more* capable of achieving these goals is what I called trivial, and it seems (correct me if I'm wrong) you just agreed with that, so I'm unsure what your issue was with the initial claim.)

Warning: rambling ahead, feel free to ignore. Geopolitical issues are hard to solve because there are many actors with many different goals, and you need to convince basically all of them to act in order to get anything done. Starving people in Africa is basically a coordination problem: most people want it taken care of, few people want to bear the burden, but if everybody helped, the burden would be light. You can look at this and say that no single person could solve the problem. That is true in a sense, because many people do have to work together to get anything done. But I don't think this zero-sum view is a very helpful way to assign blame or credit. You need to look at what happens if you introduce a person into a system or remove them, and compare the results, in order to get a good sense of whether they can solve the problem "on their own". If introducing a very persuasive person into the system causes large numbers of people to change their minds, then they alone caused the change, *even if* other people also caused the change "on their own" (because removing them from the system would also cause a failure). It is easy to see a movement with millions of participants and conclude that no single person is important, however influential, but I contend that this is the wrong way to think of it. Conversely, someone could appear to be the head of a movement and be very important, but if you removed them, someone else would take their place, so their contribution did not matter in this case. All this is to say: with the right definition of what it means to do something "on your own", it is significantly more plausible for it to happen than with another definition.
I have a problem with the idea of super-persuasion in general, because it ignores how people have both convictions and material interests that overpower any attempt at argument. A super-superintelligence, using words alone, could not persuade Hitler to convert to Judaism, or conversely, persuade Martin Luther King Jr. to join the KKK. Perhaps there is some sequence of words out there in the infinite possibility space that would persuade the Koch brothers to donate all their money to LGBT charities, but you don't have infinite attempts; you have a handful at most before they just block you. It's impossible for practical purposes, even for a super-AI. Now, the rejoinder could be that you use the intelligence to gain a lot of power, and then use pressure to achieve your goals. But then the *intelligence* isn't doing the work, the *power* is. Even a person of average intelligence could achieve a lot if they had the power to control all 6 billion people's internet, for example. I'm sure you could achieve world peace as an AI by becoming world dictator and ruling with an iron fist, but is that really the goal of the AI developer?
Sure, we have a superintelligence, but what about super-superintelligence? (I'm joking here, but also not; see the theoretical systems which can do more than Turing machines, hyper-Turing machines, which have further classifications of their own. Not all that relevant, apart from showing that 'it is defined as smarter than a human' isn't that useful a definition. And hypercomputation has the same problem as AGI, and as my degree in physics: [it is all theoretical](https://external-preview.redd.it/jCIeV1H439WRMSlrc4Xd6sU0LMAjGiZ-k-myf2YFuyE.jpg?width=1024&auto=webp&s=1925a4f16061df4cc675dd9bef42d6407b0427ee). Also, I just want to brag a little by dropping a reference to actual computer science in the thread.)
Sure, superintelligence isn’t all that technical of a term. There is an extremely wide range of things that could fall in the classification, which is part of why it’s easy to make claims about what it might be able to do.
Sure it is easy to make claims, but these claims aren't useful.
Never said they were especially useful claims, just obviously true claims. The quoted text is mostly just fluff unless you are not familiar with ideas of what AI can do.
> obviously true

Yeah sure, because we picked the assumptions. Doesn't make the assumptions true. Useless, all useless.
Which untrue assumptions is this making? The only contestable claims made are about the existential risks being, in fact, risks. That doesn't really detract from the main point about AI.
Most people with extremely high IQ scores have not achieved anything special.

Thanos had a solution to these problems, & the basilisk agrees but suggests doubling down, so yes.