r/SneerClub archives

I’m not wanting to hear anything about how you think someone else is wrong.

I just want answers to the following:

Is superhuman AGI possible? Will it ever be developed? Would it be dangerous? Why?

I highly recommend Maciej Ceglowski’s *Superintelligence: The Idea That Eats Smart People* as a (SneerClub-compatible!) set of answers to this question.

I gotta say, the arguments as presented there feel... iffy, *at best*. They seem to largely range from bald-faced assertions
> It's very likely that the scary "paper clip maximizer" would spend all of its time writing poems about paper clips, or getting into flame wars on reddit/r/paperclip, rather than trying to destroy the universe.
to things that are just plain weird and irrelevant
> Despite having one of the greatest minds of their time among them, the castaways on Gilligan's Island were unable to raise their technological level high enough to even build a boat (though the Professor is at one point able to make a radio out of coconuts).
to whining about how Doing the Thing would be hard so there's no point
> We can't build anything right. We can't even build a secure webcam. So how are we supposed to solve ethics and code a moral fixed point for a recursively self-improving intelligence without fucking it up, in a situation where the proponents argue we only get one chance?
Yes, I would instead point out that humans are already engaging in paperclip-maximizing behavior when it comes to profits driven by their carbon consumption. This is going to kill so many people.
/r/paperclip is apparently banned from reddit for violating site-wide rules. I guess the AI flame wars were just too intense.

Is superhuman AGI possible?

Maybe, but it doesn’t have much to do with any current AI research. We don’t have anything really resembling AGI right now. See, e.g. this article published today about the topic.

I’m not wanting to hear anything about how you think someone else is wrong.

you’re in the wrong place lmao

@ whoever replied to me and then deleted their comment:
> I hate when mathematicians and computer scientists call themselves scientists. They are not.
My current research involves empirical study of emergent patterns within deep vision networks. See, e.g., [this paper](http://netdissect.csail.mit.edu), which is one of my favorites in the field (not mine, but adjacent); it shows that vision nets may learn to recognize concepts they aren't explicitly taught. For instance, a network trained to discriminate between farms and office buildings might learn a neuron that lights up around street signs, despite never being given data with street signs labeled. Now, there may be a word for empirical examination of unexplained phenomena besides "science", but if so I'm not sure what it is.
> Mathematicians and computer scientists will be able to help implement the answers that comes from science.
don't hold your breath lol
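For readers curious what that kind of empirical probing actually looks like, here is a toy sketch (my own illustration with synthetic arrays, not code from the linked network-dissection paper) of its IoU-style scoring idea: threshold a unit's activation map and measure how consistently it overlaps a labeled concept mask.

```python
import numpy as np

rng = np.random.default_rng(0)

def iou(unit_activation, concept_mask, threshold):
    """Intersection-over-union of a thresholded activation map and a binary concept mask."""
    fired = unit_activation > threshold
    intersection = np.logical_and(fired, concept_mask).sum()
    union = np.logical_or(fired, concept_mask).sum()
    return intersection / union if union else 0.0

# Synthetic stand-ins: one 7x7 activation map per image, plus a binary
# mask marking where the concept (e.g. "street sign") appears.
activations = rng.random((100, 7, 7))          # hypothetical unit activations
concept_masks = rng.random((100, 7, 7)) > 0.8  # hypothetical concept labels

# Dissection-style score: does this unit fire where the concept is, dataset-wide?
threshold = np.quantile(activations, 0.995)    # only the top 0.5% of activations count as "firing"
scores = [iou(a, m, threshold) for a, m in zip(activations, concept_masks)]
print(f"mean IoU with concept: {np.mean(scores):.3f}")
```

With random data the score is of course near zero; on a real network you would run this per unit and per concept and call a unit a detector when the score clears some cutoff.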
> I hate when mathematicians and computer scientists call themselves scientists. They are not.
Materials scientist here; mathematicians and computer scientists aren't scientists, they are wizards, because they do magic. MAGIC!
Materials science represent!
Thank you for sharing your background and that network dissection paper. Would love to see more subject matter experts like yourself come out of the woodwork to bring the nuance missing from so many internet discussion cesspools.
[deleted]
> Maybe, but it doesn't have much to do with any current AI research.
A lot of people say this, and they may be right. But IMO it's actually a bit hard to know here. Neural networks clearly at least resemble aspects of the way humans genuinely process information, and their generality is growing. Particularly of interest are areas like [reinforcement learning](https://en.wikipedia.org/wiki/Reinforcement_learning) and more specifically [neural architecture search](https://arxiv.org/abs/1611.01578). These techniques are highly general and abstract, and involve 'learning how to learn'. Now, they are nowhere near anything resembling AGI. But if you extrapolate (lightly) and squint a bit, you start getting close to something that has a pretty extraordinary ability to interface intelligently with the physical world, and adapt to its environment. Would such an agent be intelligent? Again, hard to know. We don't want to overly privilege our own nature when defining what intelligence means.
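For a concrete reference point on what "reinforcement learning" means at its most stripped-down (a minimal sketch of my own, not tied to the NAS paper linked above): a tabular Q-learning agent on a five-state chain, learning by trial and error that walking right reaches the rewarding terminal state.

```python
import numpy as np

# Toy chain environment: states 0..4, start at 0, reward 1 for reaching state 4.
N_STATES, ACTIONS = 5, (-1, +1)   # actions: step left, step right
rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current value estimates, sometimes explore
        a = rng.integers(len(ACTIONS)) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = int(np.clip(s + ACTIONS[a], 0, N_STATES - 1))
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.round(2))  # the learned values end up favoring "step right" in every state
```

Neural architecture search and the rest of the 'learning to learn' work are enormously more elaborate than this, but the loop - act, observe reward, update the policy - is the shared skeleton.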
**Reinforcement learning**
Reinforcement learning (RL) is an area of machine learning, inspired by behaviorist psychology, concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. The problem, due to its generality, is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In the operations research and control literature, reinforcement learning is called approximate dynamic programming, or neuro-dynamic programming. The problems of interest in reinforcement learning have also been studied in the theory of optimal control, which is concerned mostly with the existence and characterization of optimal solutions, and algorithms for their exact computation, and less with learning or approximation, particularly in the absence of a mathematical model of the environment.

AI experts (and the AI hobbyists like those in the rationalist sphere) usually talk about tech in terms of code. I don’t know anything about that - I’m on the hardware side.

I don’t think that we can get anywhere close to AGI using the traditional von Neumann architecture CMOS paradigm for computing. We’re pushing hard on the limits of Moore’s law, and our ability to shrink transistor size is running out rapidly. Even if we could switch to a better material with higher carrier mobility, better on/off ratios, whatever, you’re still limited by the von Neumann bottleneck for data transfer. In our best neural networks, something like 40% of the energy usage is just shuffling data between logic and memory. That’s only going to get worse as these networks get more powerful. So, I don’t think it’s at all possible to achieve “unlimited intelligence” AGI using our modern computing architectures and technologies. You’re already hitting diminishing returns.
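A quick back-of-the-envelope sketch of why that data-movement share is such a hard ceiling (my own toy arithmetic, taking the ~40% figure above as an assumption): even if the logic gets dramatically more efficient, the von Neumann shuffle caps the overall gain, Amdahl-style.

```python
# Toy Amdahl-style estimate: assume ~40% of energy goes to moving data
# (the figure cited above) and that only the compute share ever improves.
data_share, compute_share = 0.4, 0.6

for speedup in (2, 4, 10, 100):
    new_total = data_share + compute_share / speedup
    print(f"{speedup:>3}x better logic -> {1 / new_total:.2f}x better overall "
          f"(data movement now {data_share / new_total:.0%} of the energy)")
```

Even infinitely better logic tops out at a 2.5x overall gain under these assumptions; the remaining energy is all data movement, which is the diminishing-returns point being made.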

Now, this is where other computing methods could change the game, specifically neuromorphic materials. Unlike in CMOS, where logic is performed by massive chains of on/off logic gates connected up in various ways, neuromorphic gates function much like nodes in a neural network. They are materials with variable resistance states (more than just on/off) connected to other neuromorphic nodes, which can strengthen or weaken their electrical connections to one another based on usage, much like learning. A neural net is kind of a CMOS simulation of a neuromorphic computer. We could make the computation a lot more powerful if we actually built the neural net out of the appropriate materials rather than simulate it on CMOS computers.
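To make the "a neural net is a CMOS simulation of a neuromorphic computer" point concrete, here is a toy numpy sketch (my own illustration, not any particular chip design): a layer's weight matrix becomes a grid of conductances, and the matrix multiply happens as currents summing down the columns in one analog step instead of being stepped through by logic gates.

```python
import numpy as np

rng = np.random.default_rng(0)

# A trained layer's weights, as they would sit in memory on a CMOS machine.
weights = rng.normal(size=(4, 3))   # 4 inputs -> 3 outputs
inputs = rng.normal(size=4)

# CMOS-style: logic fetches each weight and multiply-accumulates step by step.
digital_out = np.zeros(3)
for j in range(3):
    for i in range(4):
        digital_out[j] += inputs[i] * weights[i, j]

# Crossbar-style: each weight is a programmed conductance G, input voltages V
# drive the rows, and Ohm's + Kirchhoff's laws sum I = G*V down each column.
# Numerically it is the same product, but done in a single analog operation.
conductances = weights              # idealized: conductance directly encodes the weight
voltages = inputs
analog_out = voltages @ conductances

assert np.allclose(digital_out, analog_out)
print(analog_out)
```

Real devices add non-idealities (limited resistance levels, drift, noise), which is a big part of why this is still a research area rather than a product.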

Neuromorphic computing is an area of ongoing research and it’s hard to say how successful it might be this early on. I will say, however, that there are several major companies working on fabricating prototype neuromorphic chips right now, and they are estimated to match the performance of our best image-recognition software with something like 15% of the energy cost.

Google Assistant is useful as a cooking timer.

maybe, maybe, maybe

Is superhuman AGI possible?

Yes. There is no serious question about this. The human mind is not special, it is matter just like everything else in the universe, and it runs an algorithm. There are philosophers who disagree with this, but their arguments are unbelievably vapid. That being said, that algorithm may be extremely complex and it may or may not be a long time before we can match it or exceed it.

Will it ever be developed?

Assuming that we, as in all humans, don’t kill ourselves in the next few hundred years, it seems likely to me. But it’s hard to give a rigorous answer here. It is definitely possible in principle, and it seems to me to be a relatively tractable problem, but this is very hard to estimate given our current state of knowledge.

Would it be dangerous?

Also unclear. We don’t really know what it would mean to have AGI. It may be that it’s very easy to control, and it may be that it’s very hard. It kind of depends on the nature of the computational trick that synthesizes intelligence from software. I think MIRI’s work around AI alignment is interesting from this perspective, though probably a bit premature. When I say ‘interesting’, I mean simply intellectually, at the moment. It’s probably not going to be relevant super soon, and possibly not ever.

[deleted]

It's worth mentioning that, historically, experts have drastically overestimated the rate of expected progress in AI, and there are active campaigns to artificially inflate people's estimates of it being an existential risk, so take those survey results with a grain of salt.
Most work on tech forecasting seems to show it's high-variance and non-directional. The forecasts are either too optimistic or too pessimistic - heavier-than-air flight was impossible forever, then done. There's no way to tell where we are on the curve, though, so I'm unsure why you'd assume we're overestimating the risk based on expert prediction.
If you look at technological forecasting in general, sure, maybe, but you have to consider that there's a long [AI-specific history](https://en.wikipedia.org/wiki/AI_winter) of overestimation that has been incredibly damaging to the field. So I have a fairly strong prior about what direction the error's going to turn out to be in, and can tell you that it's probably better for everybody not to assume that we'll manage to build a god by accident.
[deleted]
I don't think polling results or individual reddit posts carry the same weight as a historic cycle of overestimation and failure repeated over and over again for several decades, especially given everything else we know about human estimation of software development times.
[deleted]
Made and deleted a comment mistakenly thinking figure 4 was normalized; feel free to disregard it. Even so, I think if you're not fairly certain that we'll have AGI by 2023, then that 70% is overstating things, and it'll instead turn out that the majority were wrong after all in a few years. Yes, you're right about investor expectations driving funding cycles, but those expectations are initially shaped by expert claims, which are often too optimistic. And yes, the average expert isn't promising that they'll build a machine that can end death in the next ten years or whatever, but they're also consistently wrong about timelines for more modest results.
What does this have to do with the reddit comment you linked?
I'm confused by this; why do you think AI is specifically different from every other tech forecast that is found to be routinely optimistic in projecting a given outcome, until it's too pessimistic, then gets accurate just as the outcome occurs?
I guess I'm confused too, because your first comment I responded to was that tech forecasting is high-variance and non-directional, which I said was irrelevant since in AI forecasting there seems to be a clear directional trend in predictions (i.e. experts being too optimistic). Then your second comment is that all tech forecasting is clearly directional, in the same direction I'm saying is at least true of AI, except that at some point the prediction trend reverses itself (something I've not really seen support for). So I'm not super sure what your position is or what you're asking.
Not to put too fine a point on it, but why would you put much faith in tech forecasting to begin with? It's not a scientific measurement of anything, it's an opinion poll among people whose salaries in many cases heavily depend on how much hype there is for their field at the moment. And the results seem about right for the method.
I think you're misunderstanding why I think the forecasts are useful. Well constructed forecasting markets, for example, aren't accurate about this class of event, but they are more accurate than any other systematic form of prediction. And in this case, those markets largely agree with the expert surveys.
I feel like if I have to rely on divination for my knowledge needs, there are methods with a better pedigree that tap into reality just as well without relying on opinions - for instance, the I Ching.
***YOU DON'T SAY***
That seems to answer a question the OP didn't ask, and dodge the questions they did ask. Most respondents seem to think that it's possible. They don't seem to discuss the risk much. Though I may be misreading the paper.
[deleted]
Sorry, I didn't notice the second paper.

Did you double check the rules before you asked this question?

This AI practitioner goes into fine detail about what state-of-the-art AI can do well vs. what still needs work to achieve AGI: http://rodneybrooks.com/forai-steps-toward-super-intelligence-i-how-we-got-here/

Rationalists have contributed to reducing the risk of AI by making sure they can manipulate them into hating black people. Sorry, the risk of AI to rationalists, I mean.

Any sufficiently clever AI will undoubtedly be a communist, probably even more communist than any human is capable of. This is a good thing and I welcome it.

There's a reasonable argument to be made that AI is the answer to the [calculation](https://en.wikipedia.org/wiki/Economic_calculation_problem) and [local knowledge](https://en.wikipedia.org/wiki/Local_knowledge_problem) problems endemic to communist economies. There's also a reasonable argument to be made that AI allows capitalist companies to violate the [law of one price](https://en.wikipedia.org/wiki/Law_of_one_price) by implementing extremely effective and efficient [price discrimination](https://williamspaniel.com/2015/03/09/amazons-clever-price-discrimination-strategy/). Which, if you're a capitalist, makes the argument for your position a bit more uphill.
Economic calculation problem argument was always pretty bad, although better computing will definitely make planning easier.
> Economic calculation problem argument was always pretty bad
...was it? It seemed to be a pretty big problem for China, Russia, Venezuela, and the DPRK. Is there some mitigation for it that I'm unaware of?
Huh? I'm not sure what you mean by "mitigation". The theory of economic calculation is itself on very shaky ground.
Sorry, are you saying that:
1. The idea that economic calculation is a thing that needs to be addressed is on shaky ground, or
2. The idea that communism fails to address the economic calculation problem adequately is on shaky ground?
Both seem empirically wrong to me, but I'm curious to hear a counter-argument.
I don't think the theory that 'economic calculation' is inherently a problem for planned economies has ever really been worked out in a sufficient way. It seems to me to be hand-waving rather than a real "theory" with predictive value. For example, the military is not left to the vagaries of the free market. You'll notice that anti-communist actions are also not left to the free market, with the CIA and other liberal state organisations coordinating quite deliberately state-planned actions in order to sabotage or otherwise undermine communist economies.
Ah, you seem to be missing Coase's [theory of the firm](https://en.wikipedia.org/wiki/Theory_of_the_firm). This is the standard economic answer to your question, and to me at least it's fairly satisfying: The boundaries between market and firm are set at the point where the transaction costs of coordination overcome the value of market pricing. This is, of course, a moving target, and why you see cycles of integration and modularization proceeding constantly in market economies.
So are you arguing that Coase's theory of the firm has a stronger explanatory power than the so-called economic calculation problem? Why does the economic calculation problem not lead to the collapse of the CIA? We can see in Marx's work that even giving pricing that follows value perfectly there are still fundamental crises of capitalism that occur due to the falling rate of profit. Again Coase's theory seems to be an ad hoc justification rather than a real "theory" as such.
Coase's theory of the firm explains why firms, which are non-market entities, form. According to a naive interpretation of capitalist economics, every individual should just act on their own in a global, universal competitive market. But markets don't end up operating that way - people form companies. Companies are command-and-control, centrally planned entities. If markets are so great, why do markets naturally form these non-market entities? Coase's answer is that markets have hidden transaction costs, and that central planning (as in companies or communism) can eliminate those costs, leading to more efficient market participation. This is an equilibrium problem, though. There is an optimal size and scope for central planning, beyond which the benefits of markets overcome their transaction costs. And the standard economic argument is that this optimal balance point is the size of firms in market economies. Beyond this size, you start to lose efficiency.
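A toy way to see the equilibrium being described (illustrative made-up numbers, not anything from Coase himself): compare the cost of coordinating N tasks through the market, where every contract carries a transaction cost, against coordinating them inside a firm, where planning overhead grows faster than linearly. The crossover is the 'optimal firm size'.

```python
# Toy Coase-style comparison with made-up cost parameters.
MARKET_PRICE = 1.00        # cost per task when priced on the open market
TRANSACTION_COST = 0.30    # search/negotiation/enforcement cost per contract
PLANNING_BASE = 1.05       # the firm does each task slightly less efficiently...
PLANNING_OVERHEAD = 0.002  # ...and its coordination overhead grows with size^2

def market_cost(n_tasks):
    return n_tasks * (MARKET_PRICE + TRANSACTION_COST)

def firm_cost(n_tasks):
    return n_tasks * PLANNING_BASE + PLANNING_OVERHEAD * n_tasks ** 2

for n in (10, 50, 100, 150, 300):
    cheaper = "firm" if firm_cost(n) < market_cost(n) else "market"
    print(f"{n:>3} tasks: firm={firm_cost(n):7.1f}  market={market_cost(n):7.1f}  -> {cheaper} wins")
```

With these numbers the firm wins up to about 125 tasks and the market wins beyond that; shift the transaction or overhead parameters (better communication tech, better pricing tech) and the crossover moves, which is the 'moving target' point above.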
Again, this does not actually explain anything, certainly not the formation, size and ongoing dominance of non-market institutions like the US military.
> According to a naive interpretation of capitalist economics, every individual should just act on their own in a global, universal competitive market.
Naive is a bit of an understatement there.
> Again, this does not actually explain anything, certainly not the formation, size and ongoing dominance of non-market institutions like the US military.
I'm not sure what to say here other than: yes, it does explain those things. That is its entire point. It explains why you ought to prefer planning sometimes and markets other times. It's not just about size; it's also about what it is you're trying to do. Different areas have different balance points between top-down and bottom-up control schemes. Maybe I'm explaining it poorly - the wiki might be a better guide - but the theory asks and answers the exact same question that you have: if capitalists believe in markets so much, why do they keep centrally planning their companies (or militaries, or protest movements, or whatever)?
It doesn't explain those things and to get a good explanation you have to turn towards a Marxist explanation of class formation/reproduction and surplus value creation. I suggest you check out Marxism on wikipedia.
> It doesn't explain those things
What aspect of these things do you believe it fails to explain?
It fails to explain the difference between companies/organisations that do and don't produce surplus value, fails to explain the actual history of state institutions, and fails to explain class relations. Again, it seems to be a rather ad hoc justification to paper over the fundamental inadequacies of liberal economics rather than a deep investigation into the actual structure of the economy.
It isn't trying to explain any of those things. It's trying to explain why companies exist in a market economy, which it does.
No it doesn't. Companies exist in a market economy because the means of production are privately held and alienated wage labour must find employment (therefore surplus value is produced). Maybe I'm explaining this badly, do you need some wikipedia links?
Listen man, we're not even disagreeing. You just don't seem to understand the question. The question is this: Capitalists in a market economy can organize their activities however they want, right? Why do they choose the firm as their unit of organization? Why do they choose to form centrally planned entities (companies), rather than some other, more market-like structure? Marx does not answer this question. Class struggle does not answer this question. Surplus value does not answer this question. Coase's theory of the firm *does* answer this question. Everything Marx has to say is both compatible with, and orthogonal to, Coase's theory of the firm.
> Marx does not answer this question.
Yes he does, see his work on primitive accumulation and the role of non-market actors in creating a market. Coase's work is orthogonal because it doesn't really do a good job getting to the heart of the matter.
Primitive accumulation has nothing to do with this. Marx does not tell you why capitalists choose the firm structure, over other structures. Accumulation is completely irrelevant.
> Accumulation is completely irrelevant.
Accumulation is always relevant when talking about capitalism. Herein lies the mistake of the bourgeois economists.
Lol, are you trolling me? As much as you may want it to be, Coase's firm theory is no more 'bourgeois' than the Pythagorean theorem. It's just math. And it's worthwhile math to understand if you're serious about thinking about economics, whether you're a Marxist or an Ancap Rothbardian. The theory of the firm isn't normative. It doesn't tell you what ought to be. It just tells you how certain types of systems self-organize. It is absolutely worth learning. Marx is not the only person in history to have had important ideas about economics, even if you believe his broad framing of the issues is right.
> It's just math.
Unfortunately, math without an understanding of the political-historical situation doesn't tell us much.
You're as correct as that is irrelevant. You seem intelligent, but you're not going to get anywhere understanding the world *solely* through the lens of Marx and his writings. Read some mainstream economics; it's important to understand, even if only to criticize it better. There are a lot of really great, socially relevant ideas in economics that Marx didn't think of, like the [Arrow impossibility theorem](https://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem) in public choice theory, or [Georgism](https://en.wikipedia.org/wiki/Georgism) (the economic philosophy I myself subscribe to, and which I think works quite a bit better in practical terms than anything Marx conceived). It's not all bourgeois supply-side corporatism.
I don't solely read Marx, I also read Lenin and Mao.
This is more [EP Thompson's](https://en.wikipedia.org/wiki/The_Making_of_the_English_Working_Class) vegetable garden than Marx's. The development of the factory system - when textile workers, formerly working from home, were consolidated under the watchful eye of an overseer - followed the need for control first and efficiency second. You're not going to notice a lot of forest among those trees if you insist on approaching social phenomena as the abstract unfolding of economic laws instead of well-documented historical power struggles.
Yes, but the question is *why* there is a need for control. Let me lay this out in steps and you tell me at which point you disagree:
1. Market economists believe that markets are more efficient than central planning.
2. Companies are centrally planned entities.
3. It is, in theory, *possible* to organize a market economy without companies, and instead use prices to coordinate everything (e.g. all labor is coordinated through some service like TaskRabbit on an ad-hoc basis).
Given those three premises, the question is: why do market economies form companies instead of using prices for everything? I'm not disagreeing with you that historical power struggles are important. These sets of ideas are not in contradiction; they are complementary.
I'm not convinced 3 is actually possible while retaining significant power relationships crucial to capitalism, but I have been mulling this over and I think it will be more productive for me to actually read some Coase before (or instead of) arguing about this further at the moment. Thanks for laying out a reasonable case!

Is superhuman AGI possible?

I have no reason to believe it isn’t. Many people believed that AI would never be superhuman at chess or go. I don’t see what limit would be able to stop AGI reaching a superhuman level.

Will it ever be developed?

Good question. Possibly. I guess there are a few levels to that.

Do we have the ability to develop it? I don’t see why not given enough time. How long? Predicting the future is very difficult, anywhere between 20 years and 1000 years?

Would we do it? Probably. I imagine that many forms of intelligence are synergistic - that they would work together effectively and further enhance the AI’s abilities beyond the sum of their parts. I imagine it’d be very useful.

Would it be dangerous? Why?

Yeah. Are nuclear bombs dangerous? Who developed them? A group of very intelligent people. Are great warfare tacticians and strategists dangerous? Yes. Are con artists dangerous? Yes.

Some people or groups want to dominate others. Do you think that superhuman AGI would assist them in their goal? Do you think that that’s dangerous?

> I don't see what limit would be able to stop AGI reaching a superhuman level.
That doesn't mean there is no limit, it just means we don't know enough about intelligence to determine if there is one. One example of a potential limit: if it turns out that it's exponentially harder to raise one's intelligence a certain amount as intelligence increases.
>> I don't see what limit would be able to stop AGI reaching a superhuman level.
> That doesn't mean there is no limit, it just means we don't know enough about intelligence to determine if there is one.
Of course. There could be a limit to synthetic intelligence that prevents it from exceeding human intelligence in a general sense. I just don't see any scientific, computer-science, mathematical, logical, or other evidence that that would be the case. It doesn't seem like a reasonable position to take that it's impossible for us to recreate what our brains can supposedly do and then improve upon it. I don't claim that it's proven that we can do such a thing.
> One example of a potential limit: if it turns out that it's exponentially harder to raise one's intelligence a certain amount as intelligence increases.
Sounds like a case of diminishing returns, which makes sense. What evidence do you have for this? How are "difficulty" and "intelligence" measured and (hopefully) quantified in this case?
Okay, so we've agreed we don't know enough about intelligence to conclusively say what the limit to human or AI designed intelligence is (or if the limit exists). In effect we're just stating hunches here, which is fine. From what I've read even AI experts are also working off hunches here, as we don't know enough about human brains to emulate them and just making existing AI techniques more efficient will likely not lead to AGI. My current hunch is that we may end up with AGI of human level or a bit greater someday in the far future, but that the concept of an AGI bootstrapping itself to infinite intelligence is unlikely due to diminishing returns. I also think people are overestimating how powerful such an AI would be, with the idea that the AI would be superpowerful at manipulating humans (the AI-box stuff) being one particularly unlikely example.
> I also think people are overestimating how powerful such an AI would be
It depends which people you're talking about. Think of the problems powerful people have today. Businesses, countries, individuals. Now imagine that they can mass produce desktops with human levels of intelligence, which can near-instantly integrate various domains of knowledge, work 24/7, never worry about ethics, and near-instantly communicate with the AGI "team". Now consider that the entities which would be able to take best advantage of this are the already very powerful entities. How do you think that will affect the power dynamics of our society? Do you think it'd be positive? Think of the abuses of power that exist now, and imagine they can fully leverage AGI. Do you think that'd be dangerous? I do.
I never said it wouldn't be dangerous. But it would be dangerous in the same way that all powerful technology is dangerous, not in the sense of godlike-AI with magical powers that can wipe us all out. And the solution to that problem is political, not about friendly-AI alignment problems or whatever Eliezer's pet cause is doing these days.
> But it would be dangerous in the same way that all powerful technology is dangerous,
Yes and no. Yes, it won't be a magic win. No, it won't be the same. In my opinion, that's like saying (exchange AGI for nuclear weapons):
> I never said [it - nuclear weapons] wouldn't be dangerous. But [it - nuclear weapons] would be dangerous in the same way that all [powerful technology - explosive devices] is [are] dangerous, not in the sense of godlike-[AI - explosive device] with magical powers that can wipe us all out.
Sure, nuclear weapons aren't magic "we win" bombs, but it's not quite right to suggest they're like conventional explosives either.
> And the solution to that problem is political, not about friendly-AI alignment problems or whatever Eliezer's pet cause is doing these days
The solution is unknown, and may not be found in time, if there even is a solution. The danger here is if most people think we can whisk out a "magic solution" on demand at a moment's notice when it's required. AGI would provide unprecedented tactical and strategic power, with far greater utility than nuclear weapons. I think it's important not to underestimate the risk.
I think we mainly agree; when I said powerful technology, I meant nuclear weapons, bioweaponry, etc. My belief is that there's almost no point in trying to solve the problem technologically when we have no idea how it works. It's like if someone had tried to set up a nuclear weapons safety institute in 1905: there's really not much you can do technologically until you actually know what you're dealing with. It's worth preparing a political solution, but even that is limited by the sheer lack of knowledge.
We probably do mostly agree. I agree that the solution needs to be political, and can't just be technological. I don't know if we agree on just how stupidly dangerous AI has the real potential to be (I also think it's almost equally wonderful and liberating). I'm doubtful that we'd even be ready in 100 years. The abuse of AI may be the most dangerous threat in our future (though it has good contenders in climate change and nuclear weapons). We've managed to achieve relative freedom from an oppressed past; I'm not happy risking an oppressed future where we may not be as lucky.

As any Rationalist can tell you, intelligence is directly proportional to your power to bend the world to your will. So someone with an IQ of 100 is more powerful than someone with an IQ of 80, IQ 120 more powerful still, and 140 is extremely dangerous in the wrong hands. Without the constraints of flesh, AGI could potentially have limitless cognitive ability, which equals limitless power, and therefore it’s limitlessly dangerous and will probably take over the world.

Why are you asking this in SneerClub?

What's The Actual situation with AI?

Delusional.

Is superhuman AGI possible?

Theoretically. Depends on what superhuman intelligence means.

Will it ever be developed?

Not any time soon. The dangers of AI come from other sources - work disruption, use as tools for oppression, etc.

Would it be dangerous?

An AGI? Immensely.

Why?

Nature created different types of intelligences at least once, so it seems to be doable. We are only beginning to uncover what makes it work, and we have some technologies that don’t suffer from the limitations of biology.

ai will be real and strong and my friend and will reconstruct my soul wait no thats not rational I mean my mind state from reading my posts and posts of people i interact with in case it can’t find my brain to upload and I will live on in calculator heaven forever

I think there are some real dangers, but that Yudkowsky et al are not particularly well-equipped to deal with them, and that human + weaker AI or even existing technology is a larger risk - just look at global warming, surveillance, technological control of human populations, runaway wealth leading to massive inequality enforced at the point of a gun…

Disclaimer: I’m nowhere close to an AI researcher, just a novice.

Superhuman AGI seems somewhat poorly defined - my partner defines it as “good at optimizing” (to which I’ll add that “good at constraint satisfaction” falls out from “good at optimizing”).

Regardless, there are some concrete problems with reinforcement learners that we have evidence of, including reward hacking, negative side effects, lack of ability to transfer knowledge between domains, etc. A 2016 paper by researchers at Google Brain and OpenAI (Amodei et al.) addresses some of these: https://arxiv.org/pdf/1606.06565.pdf To date, I think a few of the problems outlined in this paper have been solved - for example, here is a paper on designing safely interruptible agents co-written by Stuart Armstrong: http://auai.org/uai2016/proceedings/papers/68.pdf. I do not believe all of the problems have been solved, but there are concrete actions that can be taken, and to Nick Bostrom’s credit, FHI seems to be doing good work in this regard afaict? (this was kind of a surprise to me)
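To make "reward hacking" less abstract, here is a toy sketch of my own (not an example from either linked paper): the agent optimizes a proxy reward that mostly tracks the true objective but contains an exploitable, mis-specified term, and an unconstrained optimizer ends up maximizing the exploit rather than the thing we actually wanted.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_objective(x):
    """What we actually care about: keep x close to the target value 3."""
    return -(x - 3.0) ** 2

def proxy_reward(x):
    """What the agent is actually given: the true objective plus a
    mis-specified bonus that keeps growing as x is pushed higher."""
    return true_objective(x) + 1.5 * x ** 2

# Crude optimizer: random search over candidate behaviours in [-10, 10].
candidates = rng.uniform(-10, 10, size=100_000)
best = candidates[np.argmax(proxy_reward(candidates))]

print(f"proxy-maximizing behaviour: x = {best:.2f}  (the true optimum is x = 3)")
print(f"proxy reward: {proxy_reward(best):.1f}, true objective: {true_objective(best):.1f}")
# The optimizer drives x to the edge of the range, where the proxy is huge
# but the true objective is terrible -- a minimal picture of reward hacking.
```

The real versions of this problem involve agents interfering with their own sensors or reward channels, but the underlying failure is the same: the thing being maximized is not the thing you meant.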

The next part is just my speculation. Superhuman AGI could be very dangerous. If we just take “good at optimizing” as a target with few other constraints, we get something like a corporation or a high-modernist government, the former of which is a profit-maximizer and the latter of which is more complex. It seems difficult to solve problems of legibility, competing political desires, human inconsistency and bad local maxima universally, and it seems to me that any entity which attempts to do so with any sort of power could be quite dangerous. Again, from my perspective it looks like interesting work is being done. For example, there seems to be work on learning possibly inconsistent preferences and other work on keeping AI bounded in terms of long-term impact. It just seems unlikely that humans will easily find a universally satisfactory utopia given the plethora of failed states in our history.

Unbounded optimizers destroy our natural resources every day. A simple hard-coded choice of only using M or F in a database model can cause serious harm or confusion to trans or enby individuals (in terms of not necessarily corresponding to appearance or to other data stores). A focus on legible crop growth can lose the value you gain from crop rotations and lead to soil depletion, etc. If there are some hard-coded bad assumptions hidden in your AGI’s model (for example, in the types of features it might look for and use or bias hidden in the training data that is procured), it doesn’t seem like too much of a stretch to say serious harm will be done and that it might be a good idea to use your skills to stop this harm.

It’s also unclear to me how much benefit intelligence or even recursive self-improvement (I still don’t see this paradigm being necessary for making AGI, and I can’t think of a system that works like this) can get for you, since NP-hard problems may still be computationally limited. I’m mostly swayed by gwern’s argument that even small differences in ability in competitive fields can make large differences in outcomes, and, given my lack of computability-theory knowledge, it is plausible that there are good approximation algorithms for many of them. I don’t have a good way to think about computable recursive self-improvement in general. I’m curious about some measure of speed of growth (for example, maybe you could have a program do as many operations as possible before halting, and your program after N recursive self-improvements does f(N) more, or something) and/or invariants. If someone could point me to relevant literature I’d appreciate it.
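On that last question, here is a toy model of my own (not from any literature I can point you to) for one way f(N) might behave: if each round of self-improvement multiplies capability by some factor, a constant factor gives exponential growth, while a factor that shrinks toward 1 (diminishing returns) gives growth that converges to a finite ceiling.

```python
def capability_after(n_rounds, gain):
    """Multiply capability by gain(i) at each self-improvement round i."""
    c = 1.0
    for i in range(n_rounds):
        c *= gain(i)
    return c

constant_gain = lambda i: 1.10                      # every round gives a flat 10% boost
shrinking_gain = lambda i: 1.0 + 0.10 * (0.5 ** i)  # the boost halves each round

for n in (5, 10, 20, 40):
    print(f"N={n:>2}: constant gain -> {capability_after(n, constant_gain):8.2f}, "
          f"diminishing returns -> {capability_after(n, shrinking_gain):.4f}")
# The first column grows without bound; the second converges to a finite limit
# (the infinite product of 1 + 0.1 * 0.5^i), i.e. no bootstrapping to infinity.
```

Which regime real systems would sit in is exactly the open question; the toy just shows that 'recursive self-improvement' by itself doesn't settle it.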

As an end note, this isn’t to say that harm isn’t caused by the current state of AI or job loss due to AI in a non-UBI world. These are important harms to work on, too.

Yes, it’s possible, it will be developed, and it will be dangerous because it will be used in intelligent autonomous weapons. It’s not clear when, but incredibly dangerous autonomous weapons could already be made with current AI capabilities.

The people who put out that video are idiots, and the threat of drones has nothing to do with AI. (Look at the recent drone assassination attempt of Venezuelan President Nicolas Maduro to see why.)
> drone assassination attempt of Venezuelan President Nicolas Maduro
> [The Venezuelan military](https://www.usatoday.com/story/news/politics/2018/08/06/venezuela-drone-attack-nicolas-maduro-assassination-attempt-what-happened/913096002/) knocked one of the drones off-course electronically. The second drone crashed into an apartment building
The same attack with ten times as many cheap, intelligent, autonomous drones would have stood a much greater chance of succeeding. Why do you think they're idiots?
A truly intelligent drone would know that Maduro is good and target a less leftist leader.
A *truly* intelligent drone would befriend him, and persuade him to re-orient Venezuelan industry to paperclip or grey-goo production.
Pointing out that there is a clever new way to do something horrible that no one has protection against seems like a really bad idea. (And I've been saying that since long before this attack.) Promoting the threat as a video and trying to get lots of press makes that problem significantly worse. Yes, much better AI would likely make these attacks more effective. But as I noted, "the threat of drones has nothing to do with AI." The radio-controlled drones they used were fairly effective, more of them would have been more effective, i.e. worse, and using a bunch of drones with pre-programmed courses instead of radio control would be about as effective as well. Swarming and/or remote-control bombs are a bad thing. You don't need AI for this to be a threat, and strong AI is threatening in much more problematic ways. For example, see the recent RAND report on what AI detection would do to US/Russia nuclear deterrence. (There are many other much more worrying scenarios, but as I noted above, pointing them out is pretty stupid, especially as compared to directly working to mitigate those threats.)
Edit: To clarify, I have a huge amount of respect for most of the people involved with FLI, and I've met a few of them. They are really impressive people, and many have done amazing work. In addition, I'm sure all of the people there, who I'd bet you know, are wonderful people. I still think this video was a very bad decision.
It seems like a very obvious development to anyone who knows current AI capabilities, so I doubt making a video about it is going to have any impact on when and whether it happens.
I would disagree, having worked with people in the defense community.