r/SneerClub archives

I’m fully aware that the ‘Rationalist’ movement has done nothing whatsoever towards reducing the existential risk of AI, and pretty much just sits around talking about the problem and ostracising people who might actually be able to contribute.

But besides that, do you guys think the basic idea - that there’s a significant risk that an AI will be created that will murder humanity, or worse, and therefore we should dedicate substantial resources towards stopping that - is valid?

IMHO they vastly misunderstand the scope and risks of the problem. I’m not at all convinced that “AGI” or whatever you want to call it is anywhere on the horizon, and I have a hunch that there may be deeper theoretical barriers to it beyond processing power and algorithmic sophistication. It doesn’t help that AI/automation panic is cyclical (about every 30 years), and the hand-wringing over AGI is part of the broader, more normie cyclical concerns over the robots taking the jobs (which is misunderstood in the same way).

Risks: the use of AI for corporate Orwellianism is a much bigger concern than a species-extermination scenario from Stellaris, because the former is actually happening right now. I don’t see the latter as a real concern at all, given my prior comments about scope. AI has way more potential for quotidian evils and the enhancement of corporate and state power, from which the wealthy will be largely exempt.

Basically, I see Yud et al. as a self-important meme subculture. And it’s not like the “work” they’re doing is important or unique; plenty of more intelligent, better-informed people are sounding the alarm about things like facial recognition technology.

I honestly see their obsession with AI while ignoring things like global warming as equivalent to someone ignoring the truck speeding towards them because they have a 20% chance of saving 10 people, which they treat as more pressing than the 100% chance that they themselves are going to be killed by the truck.
I don't follow them closely so I don't know what they've actually opined on global warming. That said, if they think murderous AI is a feasible threat within their lifetimes, they very well could believe technology is progressing fast enough to effectively "solve" issues like climate change, or at least insulate white, western people like them from its worst effects. Which, of course, would be incredibly naive and oversimplistic - so, dovetailing nicely with their views on AI.
This is what many of the futurists like Kurzweil and Bostrom believe, IIRC. Bostrom is a fan of Geoengineering Ex Machina and Kurzweil thinks climate change is "no problem".
Indeed..
> AI has way more potential for quotidian evils and the enhancement of corporate and state power, from which the wealthy will be largely exempt.

I wonder what our ostensible AI experts Elon Musk and Bill Gates have to say about that.

There is little reason to take it seriously. You are severely underestimating how many moving parts the cultists’ arguments have, each of which is highly speculative. To name a few:

  • It has to be possible for AGI to be unsafe, but also possible to be safe.
  • Humans have to be able to make AGI.
  • Such human-made AGI has to be at least somewhat likely to be unsafe.
  • Humans have to be able to do things that make AGI more likely to be safe.
  • But it should not be too easy, because if it were easy then you wouldn’t need to spend any significant resources on it.
  • Safety-promoting activities have to be possible now, not 500+ years into the future.

And this is all while assuming their scifi-inspired notions of AGI and safety even make sense.

Edit: If you read their writings, they only try to argue that AGI can be unsafe and that making it safe will be hard. Every other link in the chain of reasoning they just assume to be true. If you go along with that approach, AI Safety can seem like a much better sell than it really is.
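To make the “many moving parts” point above concrete, here is a toy sketch (every probability below is made up purely for illustration) of how quickly a conjunction of speculative premises shrinks, even when each one is granted generously:

```python
# Toy illustration (probabilities entirely made up): even if each speculative
# premise is granted a generous 50-70% chance, the conjunction the whole
# argument rests on gets small fast.
from math import prod

premises = {
    "AGI can be unsafe but can also be made safe":        0.7,
    "humans can build AGI at all":                        0.5,
    "human-built AGI is likely to be unsafe":             0.6,
    "humans can do things now that make it safer":        0.5,
    "...but not so easily that no resources are needed":  0.6,
    "safety work is possible now, not centuries out":     0.5,
}

p_all_hold = prod(premises.values())
print(f"P(every link in the chain holds) ~= {p_all_hold:.3f}")  # ~0.032
```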

That’s a good point
> It has to be possible for AGI to be unsafe, but also possible to be safe.
>
> Humans have to be able to make AGI.
>
> Such human-made AGI has to be at least somewhat likely to be unsafe.
>
> Humans have to be able to do things that make AGI more likely to be safe.
>
> But it should not be too easy, because if it were easy then you wouldn't need to spend any significant resources on it.

These seem like fair assumptions to make, given that they're generally true of many powerful technologies (nuclear tech for a historical and kinda current example).

> Safety-promoting activities have to be possible now, not 500+ years into the future.

We don't know when AGI will happen. It might be one major theoretical breakthrough away. Better to be safe than sorry, and there are plausibly solvable and useful questions in advanced AI research right now.
Yes, but if we all die before then from global warming or some equivalent..

If you’re interested in this, you should read *Superintelligence: The Idea That Eats Smart People*.

In my opinion, it does a good job of debunking the MIRI crowd’s premises, and providing a slew of simple refutations.

But AI is in fact a danger in the sense that it is already being used, by humans, to increase unchecked wealth inequality, and this trend will only get worse unless forcibly stopped, whether by government action, or revolution of some kind. The “means of production” becomes increasingly sophisticated and increasingly inaccessible to all but the most well-funded, which is a runaway effect. It’s not the AIs alone that are the risk, it’s how people use them.

In many ways the concern about the threat of AI is just projection: what if AI behaves as unethically and irresponsibly as many of us do? But the real risk is that AIs will help the selfish and unethical to increase the gap with the rest of humanity, and that systemic effects will conspire to prevent changing this, as is the case with, e.g., anthropogenic climate change.

I agree.

> that there’s a significant risk that an AI will be created that will murder humanity, or worse, and therefore we should dedicate substantial resources towards stopping that - is valid?

Imho one of the big problems with this reasoning is that it shifts focus away from the real problems with AI today, where mediocre machine learning is used for all kinds of horrible purposes. ‘I’m sorry sir, the machine says you committed disability fraud, and your benefits are now removed. The computer says moops.’ kind of stuff.

Especially considering how many techies are absolutely convinced that, unless there is another techie in the room, they are the smartest person there. The rationalist bias towards thinking they are not that biased, and are rational, also doesn’t help.

Now the main point: should we worry about AGI killing us all? If AGI is possible, and if AGI is capable of exponential self-improvement, yes. But I’m not convinced of either, and I think big corps/govs with machine learning tools are a bigger problem. (And as somebody else already said, organizations are basically already AGI.)

(I love thinking about it as a science fiction concept, however.)

What do you think of the argument that there’s not much research in the AI area as opposed to say, climate change, and therefore one person is more capable of making a difference by contributing to it?
I'd say the person saying this is wrong. There is a lot of research being done into AI. Unless you mean specifically AGI, in which case I fall back to 'not sure AGI is possible'. And, well, OpenAI got 1 billion dollars; that isn't a little bit.
Good point
[deleted]
Well, the idea is that AI risk is as important as climate change, so it should have as much focus.
[deleted]
Yeah

Thing is, there are a lot of variables here.

The basic premises aren’t that unreasonable:

  1. Humans might one day be capable of creating a general AI that’s more intelligent than humans.
  2. That AI would then also be capable of creating an AI that’s more intelligent than itself or be able to modify itself to become more intelligent (which creates a feedback loop of increasingly intelligent AIs).
  3. You can have intelligent beings that don’t align with human values.

All those seem almost obviously true to me and we already have examples of simpler versions of this in action.

  1. We already have “technology” that’s more intelligent than most humans. It’s called “organization” where we combine the insights and skills of multiple humans towards a common goal.
  2. We already have simple software that can improve itself and we also have organizations that have found ways to self-improve.
  3. Again, organizations provide us a good look at “intelligent” things that don’t necessarily align with human values. Alternatively, you can look at other sapient animals, some of which are quite weird from a human perspective.

In my opinion, the reasoning holds up. If you assume that AGI is possible, the rest follows.

The problem with Yudkowsky and other so-called rationalists is that they have (in my opinion) a disproportionate outlook on the urgency of this problem. “Friendly AI” is definitely a thing we should be looking into, and a MIRI-sized organization staffed by competent people is worth having.

The idea that Friendly AI (or whatever it’s being called right now) is THE MOST IMPORTANT PROBLEM OF OUR TIME and we should all be throwing all of our resources towards them because having a benevolent AI god automatically solves all of our other problems is what I take issue with.

> 1. We already have "technology" that's more intelligent than most humans. It's called "organization" where we combine the insights and skills of multiple humans towards a common goal.
> 2. We already have simple software that can improve itself and we also have organizations that have found ways to self-improve.
> 3. Again, organizations provide us a good look at "intelligent" things that don't necessarily align with human values. Alternatively, you can look at other sapient animals, some of which are quite weird from a human perspective.

I can't remember where I first saw it pointed out, but if you look at the problem this way, you can make a good case for the argument that we already live in a world containing a "super-intelligent entity" that's "maximizing for something other than human values". It's called "capitalism", it's a program running in parallel on the brains of millions of people, using them to act on a global scale to maximize profits/shareholder value/GDP over human well-being or even survival.

Heck, if someone said that an unfriendly AI could convince a large number of people that it would be a good idea to turn a not-insubstantial percentage of the world's mostly-carbon-derived electrical power into heat in order to perform mathematical calculations that themselves serve no purpose, but that sometimes produce solutions that also have no extrinsic value but that those people have been convinced are valuable in and of themselves, and then to additionally turn increasing amounts of electricity into more heat every time one of those solutions becomes "owned" by someone new, even though we're facing a global catastrophe driven by carbon emissions and excess heat, it'd seem completely ridiculous and unbelievable. But that's pretty much exactly what a cryptocurrency is, and capitalism has convinced enough people that that's a good idea to make Bitcoin a larger consumer of electricity than Switzerland.
I’m strangely convinced by their idea that the low likelihood of an AI actually being produced that greatly alters the course of human history (whether for bad or for good) is made up for by the massive loss or gain inherent here. What do you make of this?
You fell prey to the LW reinvention of Pascal's Wager. If you could ensure that you can make a benevolent AI god before, say, human civilization as we know it goes away, then yeah, the math works out in favor of creating the benevolent AI god. That's a big assumption, however, especially since we have no clue how to create such a thing, no realistic timeframe for how long it'll take, and no guarantee that any effort we put towards it will (a) work and (b) not result in a really bad outcome ("oops, the AI is only mostly benevolent and thinks gorillas have more worth than humans"). The argument that the low likelihood is made up for by the massive (potential) loss or gain only works if you already assume whatever efforts you take will end up working. This is the same problem as in Pascal's Wager.
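A toy expected-value sketch of that last point (every number below is made up purely for illustration): the wager only pays off if you quietly assume the safety effort has a meaningful chance of actually working.

```python
# Toy expected-value comparison (all numbers purely illustrative).
# Pascal's-Wager-style reasoning only favors the AI-god project if you
# assume your efforts have a non-negligible chance of succeeding.

VALUE_OF_GOOD_OUTCOME = 1e15   # hypothetical "astronomical" payoff
COST_OF_EFFORT = 1e9           # resources poured into the project

def expected_gain(p_agi_matters, p_effort_succeeds):
    """Expected value of funding the project under this framing."""
    return p_agi_matters * p_effort_succeeds * VALUE_OF_GOOD_OUTCOME - COST_OF_EFFORT

# If you quietly assume the effort basically works, the math looks great:
print(expected_gain(p_agi_matters=0.01, p_effort_succeeds=0.5))   # hugely positive

# If the chance your effort succeeds is itself tiny and unknown, it collapses:
print(expected_gain(p_agi_matters=0.01, p_effort_succeeds=1e-9))  # negative
```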
> no guarantee that any effort we put towards it will (a) work and (b) not result in a really bad outcome ("oops, the AI is only mostly benevolent and thinks gorillas have more worth than humans").

The whole point of AGI research efforts is to avoid creating those kinds of "benevolent" AI. Obviously it's not certain that working on the problem will result in a better outcome, but usually you get better outcomes when you try than when you don't try.

I don't think the comparison to Pascal's Wager is totally correct. There's no reason to believe in gods, much less the specific god that will enact the punishments. Whereas most experts think AGI is likely to happen within the next century, and there's a lot of ways to screw that up.
Yeah but we have bigger issues to deal with first
We can walk and chew gum. It's not like the dangers posed by technology are a trivial thing we can pay no attention to at all.
Indeed, but you need to prioritise
So that’s not valid because you also have to consider the inherently unknowable probability that your efforts will either make things worse or have no effect?
[deleted]
Wait, seriously? A group of people obsessed with quantifying everything have never actually assigned actual probabilities, or precise loss/damage counts, to these hypothetical futures?
They obsess with giving *the illusion* that they've quantified everything, which they like to use as a cudgel in bad internet debates. They seldom actually quantify anything, because that would require them to justify the minutiae (which they actually couldn't do).
LW: You have to give Big Yud money because our calculations say so.

A sane person: What calculations?

LW: Shut up and multiply.
[deleted]
Interesting
[deleted]
Holy shit. So they claim to be rational while making completely unsubstantiated arguments?
[deleted]
What do you think of the ‘neglected, not overcrowded’ argument? I’ve heard that one before.
[deleted]
So they’re incredibly defeatist about the very solvable issue of climate change and bizarrely optimistic about the far harder to solve issues of AI, morality, and transhumanism?
[deleted]
I didn’t consider that, that’s really insightful
The other thing they are bizarrely optimistic about is the possibility of making worthwhile progress on AI, morality and transhumanism. But still, someone should give it an attempt; the world is a big place with many people, and these are fun issues to think about for some people. So why not, I don't begrudge them writing blog posts and thinking about AI, morality and transhumanism, or sometimes doing actual research.

The doomsday cult thing is how seriously they take themselves: the most important issue in the world, donate all your money and join a group house where you donate all your work for the cause. In exchange, you become one of the most important people in the world. Hey, we even have a special program to recruit VIPs! Yeah, really? Scientology or LaRouche offer the same deal.

To be fair, our favorite doomsday cultists seem much nicer, nerdier and more fun than vanilla cults, which is why I engage with them (as a sneerer, but still).
I think the only thing that differs about them from more blatant cults is they haven’t murdered or abused anyone (yet)
Uh, welcome to rationalism?
Yeah but that’s even more nakedly irrational than is typical for them
Sounds pretty typical to me.
Good point
[deleted]
Yeah I think that’s a good point. A lot of their arguments come from the estimated probabilities given by AI researchers, even though a) they normally shun actual AI researchers b) that’s not a good way of calculating likelihoods
Yeah, I often see this question phrased as "give a probability between 0 and 100%", which greatly primes people to give estimates above 1%, even though the answer could vary by orders of magnitude.
Indeed

I think the problem with Yudkowsky’s “AI go foom” scenario is the same fundamental problem at the core of rationalism in general: the belief that pure intelligence is superior to empiricism in forming knowledge. See, for example, his post about choosing BAYES over science, calling people idiots for not believing in many-worlds, etc.

This is what leads him to believe that an intelligent AI will be superpowerful, as exemplified by the AI-box experiment, in which he claims an AI that is isolated from the internet can talk its way out through pure intelligence. But if you want to learn how to be a super-persuader, you have to run experiments, i.e. actually talk to human beings. Similarly, the machine cannot deduce all the laws of physics from first principles (he once claimed an AI could deduce Einstein’s gravitation from a webcam looking at a piece of grass); it needs to run experiments with the appropriate equipment. Einstein could not have learned the secrets of the universe locked up in solitary confinement his whole life.

This means that in reality, any AGI will be limited by the speed of experiments and the speed of production. The idea that any AGI will automatically become omnipotent is completely overblown. Frankly, I think we could take it down as long as we aren’t idiots about deploying autonomous drones everywhere.

‘an AI could deduce Einstein’s gravitation from a webcam looking at a piece of grass’ ..What
From [here](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message):

> A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the *dominant* hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

One of these days I'll go through in depth just how insane this statement is, but it really exemplifies the overconfidence in raw intelligence out there.
Well I can think of some things off the top of my head: a) Isn’t Bayesian AI pretty much ruled out by actual AI researchers? b) who’s to say it wouldn’t devise some completely different theory of gravity considering it thinks nothing like us, by that logic? c) Hasn’t he just pulled this claim out of thin air with no real evidence?
I have a degree in physics; that paragraph gave me convulsions. You can't deduce shit from a picture of grass: the shape of grass depends on the internal chemistry of the grass, which the computer *doesn't know shit* about and cannot deduce from first principles. Similarly with this nonsense about "simulating the opponent's brain" to beat the AI-Box: you can't know shit about someone's brain unless you actually do some futuristic brain scan.
I’m planning to take a degree in physics soon and it... doesn’t impress me either. Eliezer apparently belongs to the ‘medieval scholars debating how many teeth a horse has while refusing to just go get a horse and check’ school of thought.
> (Oh, and every time someone in this world tries to build a really powerful AI, the computing hardware spontaneously melts. This isn't really important to the story, but I need to postulate this in order to have human people sticking around, in the flesh, for seventy years.)

Is that a *definite prediction of an AI timeframe*, EY? Tut tut. Very naughty. Surely you know better.
Like, I would define intelligence as your ability to process information. Even if an AI is superintelligent, if it lacks information, it can’t actually do anything with all those ‘IQ points’ it’s given itself.

That’s a really good point..

I think the rationalists are basically right that existential risk is very important, and that AGI represents a risk class that we should know more about. I’m skeptical that MIRI is especially effective, and that their framing - an alignment problem with a technical solution, especially one pursued outside of the normal academic/government channels - is right, but I do think they’ve actually done some useful work in bringing the question to a broader consciousness - certainly you see more discussion of this in more “serious” channels than you used to, and that’s a good thing.

Yeah I think it’s right to be worried, but not *as* worried as they are
Honestly, I'd say that Terminator has done more to make people seriously consider an 'AGI' threat than MIRI.

Whether or not the concerns - either specifically or in broad outline - are reasonable, his methods are definitely not going to do anything about it.

Concern about AI ethics is far better directed at things like, “How will authoritarian governments use facial recognition technology?” or the antihuman nature of capitalism than mitigating the risks of acausal robot gods - especially as the problems you might roughly define as climate change are a slow motion version of what “AI murdering humanity” looks like when you realize “AI” is just “humans using technology”.

Indeed..

Very little.

What is telling is that an AI that is already on the way to destroying humanity exists: it’s called the YouTube Recommendation Algorithm, as it’s literally making nazis. No need for Total Information Tactical Awareness Networks or exsurgent virii, just fucking stupid algorithms and the profit motive creating a fascist tide.

And those manbabies are shitting themselves over fucking Skynet. Jesus Christ.

Edit: Also, I think the only school that could create an AGI, biomimetics, has fallen out of favor, replaced by neural-net solutions that, while quite unlikely to be made intelligent, are far more easily made into tools.

Indeed..

It is 117% a defense mechanism against having to deal with real problems in one’s own life or out there in the world right now that you don’t need to write speculative fiction about.

A need to feel important

The biggest problem with Yud’s brand of AI alarmism is that it is based on an “intelligence explosion” by recursive self-improvement, a hypothesis that under the hood requires a downright insane assumption: that it is neither the volume of data nor the processing speed that fundamentally limits intelligence, but the algorithm. Yud once claimed that a superintelligence would hypothesize general relativity from a few frames of video of a falling object. This represents a fundamental misunderstanding of how intelligence works.

What do actual AI researchers think of that?

If you haven’t seen Maciej Cegłowski’s talk about this, you are missing out.

And in general, if you’re not following everything he does, you are missing out.

Edit: ninja’d by u/antonvis

Thank you!

I think there are a bunch of issues that are weird even before we get into the practical issue.

Like “What is intelligence in the first place?”, “Can intelligence really be scaled up like they think it can, and won’t it just run into diminishing returns incredibly quickly?” etc.

And that’s before we get into “AGI is not only not technically feasible at this point, but we can’t even begin to think about how to do it.”

Yeah
Adding to this, they pretty much blow past the fundamental issues and just assume AGI will be some kind of 1 trillion IQ Skynet trained in timeless decision theory. It's worth going through some basics on philosophy of AI and philosophy of mind before getting caught up in the hype:

https://plato.stanford.edu/entries/artificial-intelligence/

https://plato.stanford.edu/entries/computational-mind/

https://plato.stanford.edu/entries/multiple-realizability/
I dunno, I think the ALife types have created working proofs of concept, unlike literally every other AI school; the reason it isn't built on isn't that it's an unworkable scheme, it's that other schools are more easily made into tools, more immediately useful.

Nah. Francois Chollet, one of the prime movers behind Keras, a very famous ML library, had this interesting article on the subject: https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec

In particular, there is no such thing as “general” intelligence. On an abstract level, we know this for a fact via the “no free lunch” theorem — stating that no problem-solving algorithm can outperform random chance across all possible problems. If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.
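For reference, the usual formal statement (Wolpert and Macready's search formulation; the notation below is a standard textbook rendering, not taken from the article) says that for any two search algorithms $a_1$ and $a_2$, performance summed over all possible objective functions is identical:

$$
\sum_{f} P\left(d_m^{y} \mid f, m, a_1\right) = \sum_{f} P\left(d_m^{y} \mid f, m, a_2\right)
$$

where $f$ ranges over all objective functions on a finite domain, $m$ is the number of evaluations, and $d_m^{y}$ is the sequence of observed cost values. Averaged over every possible problem, no algorithm outperforms any other, including blind random search; any advantage on one class of problems is paid for on another.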

Personally, I find the mathematical argument about there being no free lunch extremely compelling. I think it’s apparent to any honest observer of ML/AI that the algorithms at work are extremely narrow. You can feed an AI the entire corpus of Bach and produce more Bach music, but you can’t feed an AI the experience of a human life, and expect it to produce original music, or do anything but produce music like Bach’s. (That is, for Bach, the ‘training data set’ was being a human, plus Bach could do a bunch of other things like dress and feed himself.) The brittleness of current AI algorithms is enough to convince me that there’s no particularly interesting breakthrough coming by in the near future.

But even if those algorithms became more generalized, the no free lunch theorem guarantees they would not become these super-intelligent god-monster beings that the rationalists like to scare themselves with.

The whole article is well worth your time, by the way: he goes on to explain that intelligence is highly situational, and so is the successful application of intelligence. Intelligence is not some kind of magical superpower. Believing that is only possible if you’re the kind of dork who was good at school and nothing else.

Interesting

Blather about X-risks in practice tends to mean that folks ignore the real existential risks to civilization - Climate Apocalypse, The Fascist Tide, The Bomb, Plagues, Bolides (the last one just got underscored in dramatic fashion, considering we just got buzzed by a city-killer asteroid) - in favor of wank like Grey Goo or SHODAN.

Indeed
Many people concerned about X-risks are specifically addressing nuclear bombs and pandemics, actually.
I've yet to see a serious one actually claim themselves as part of that lot... and by plagues, I mean real life shit coming outta the tropics or multi-resistant things, not some hypothetical genetically engineered superbug. We have public health, epidemiologists, and antiproliferation folks. The X-risks subculture is, at best, unneeded for such pursuits; at worst, an avenue for hijacking.
> by plagues, I mean real life shit coming outta the tropics or multi-resistant things, not some hypothetical genetically engineered superbug

So do most EAs working on pandemic preparedness, as far as I've found. Just Google "80,000 Hours pandemic" for starters, there are plenty of credible voices addressing this issue from an X-risk perspective (and I say this as someone who isn't very gung-ho about X-risk anyway, for the record). Seems like you're pulling a "no true rationalist" here, honestly.
I'm in training to do epidemiology, hopefully infectious disease. I look *dimly* on the prospect of the equivalent of ['whackers'](https://rationalwiki.org/wiki/Whacker) getting underfoot or, chasing stupid shit or otherwise being an active hindrance to an important job like that.
which is not an unreasonable concern, given the same people are doing *precisely* that to actual-AI risk
And literally anything else transhumanism or X-risk related - memetics is a whole field of folks doing this to rhetoric, FFS.

I think it’s possible, but not necessarily likely, and much, much farther into the future than they think it will be. Those text-generating algorithms Scott Alexander seems so impressed with are basically just a neat toy at this stage; it’s pretty weird that he acts like they could be remotely compared to real intelligence or sentience. Curing cancer, or something like that, would at this stage of history be a much more realistic and important way to improve the world.

Yeah

I didn’t come up with the following analogy, but: people who see an algorithm do something clever and say “Oh my god, human-level AI is just around the corner!!!” are equivalent to people who see a magician do a trick and say “holy crap, that guy can do real magic!!!”.

In the case of AI; nobody has any idea how to even start making an AGI and anyone who tells you different is lying. It might be 50 years away, it might be 500 years away, it might be literally impossible for some reason we haven’t discovered yet. Nobody fucking knows.

Yeah, it’s all unknown territory

[deleted]

Why? I don’t disagree, I’m just curious as to your reasoning.
[deleted]
It's not entirely a sham. There has been real progress. For instance, in the '80s Hofstadter said the central question in AI is "what is the letter A and the letter I?", in the sense of recognizing text in strange fonts. Now we have those kinds of intuitive perceptual capabilities (which are the core development powering AlphaGo, also a major advance), but we're realizing that there's more to our intellectual capabilities, like goal-oriented modeling and planning, which are problems no one seems to have even a plausible approach to at this point.
Yeah

NSFW tag?

Also, what is your real concern? Is existential dread of climate catastrophe too much for you to handle? If that is the case, fine. But why instead make up a future bogeyman to haunt the closets of your mind?

Are you personally afraid that you will be killed by the big bad AI? If you have children, is it them that you are concerned for? Is it your future grandchildren? Is it the future generations who will be the same as strangers to you?

Or do you believe that some future AI will reincarnate you simply to torture you, a la I Have No Mouth, and I Must Scream? If that is the case, all I can say is get over yourself. You, like all of us, are insignificant. The omniscient AI will presumably be unlikely to waste its time reincarnating you.

There are plenty of actual issues to worry about. Worry about those. If worrying about those upsets you, then don’t worry about them. But don’t go around making bogeymen for yourself. Particularly when the real issues we are facing are likely to degrade our societies to a point where we won’t be able to make toasters, much less omnipotent AIs.

Why would I tag this with NSFW? No, I’m worried by his argument that the only moral thing to do with your life is to help AI research because I’m having a bit of an existential crisis about things like that and I’m looking for rebuttals of it so I can ignore his arguments.
> Why would I tag this with NSFW?

Cause it's a rule of this forum, and I'm surprised a mod hasn't come after you yet.
My apologies.
Generally, serious or effort posts are given an NSFW tag. OK, let’s break it down. Why is it moral to preserve the human species?
Well, intuitively it’s because I’m human, I care about humans, and I think all life should be preserved when possible, although not necessarily all ‘specific’ life (I’m of the opinion that immortality is pretty unachievable for me or any of my loved ones, mainly because I don’t think a copy of my ‘consciousness’ uploaded into a computer is me at all). I could say that it’s a utilitarian desire to produce the least suffering and the most happiness possible, but that’s exactly the focus of my crisis. I was fundamentally a utilitarian until I realised fairly recently that utilitarianism gives counterintuitive conclusions depending on your interpretation. One of those is that if there’s a significant probability of solipsism/one-man simulation, I have a *moral obligation* to act selfishly, and I abhor that. And the set of other awful results includes the repugnant conclusion, torture over dust specks, and ‘give money to AI research’, which I don’t like one bit. I hate all those conclusions, and I refuse to believe they’re ‘right’. So I suppose my crisis is about how I can revise my ethical system to one I’m comfortable with, and not one that’s more flawed than I previously realised.
Ok, so on this basis for the morality of preserving life, are there any other activities which also achieve the preservation of human life? I would argue yes. You could dedicate your time and resources to combating nuclear proliferation. You could contribute to the CDC; who knows what new plagues await us in the melting permafrost. I am sure you can think of others as well. There, we've established at least two other things it is moral to do under Yud's rubric. So, as always, Yud's wrong. Better yet, if you want to extend your conception of morality beyond the human species, then you open up a whole plethora of species which are going extinct daily. There are countless efforts you could make to preserve those species which would be considered moral by most people.
But I still worry about his argument that potentially causing humanity to suffer tremendously or potentially causing humanity to be tremendously happy outweighs just plain killing humanity, although that again runs into the issue of quantifying probabilities and amounts of suffering (like, for example, it’d be pretty much impossible to work out every single scenario of climate change and their respective probabilities and ‘losses’ and weigh that against the calculated scenarios of AI in the same way).
But why would either of those happen? We seem to be back to my original comment. And if fighting AI is moral because it prevents human suffering, then that opens up a whole new world of moral possibility. Humans are constantly suffering. In fact, I would argue that it is immoral to divert any resources from addressing existing suffering in order to address a vague potential future suffering.
So you’d argue the greater level of uncertainty involved in future suffering diminishes it as a priority compared to current suffering?
Of course. Particularly when this idea of potential future suffering appears to be largely based on science fiction. Further, we already know that there are factors at present that will increase suffering in the future: climate change and water shortage. These are concrete realities which you can take actual steps to help mitigate. It is masturbatory to ignore the current suffering in the world, as well as the future suffering which is certain to result from existing factors, in order to engage in a STEMlord’s dumb wet dreams of being dommed by Mother AI.
Indeed

How would anyone ever form an opinion on this without doing a deep technical dive?

MIRI in particular is all about agent-based AI safety, which is irrelevant, but look at other areas.

What do you mean?

People have been trying to build AGI since the 1950s. It’s hard. If you ask people working on AI today, they will generally say that AGI seems far away. There’s some interview with Demis Hassabis (head of DeepMind) where he says he thinks it’s far away, and he’s someone with an incentive to hype the prospects for AGI.

It’s okay, and good, for a handful of academics to study distant, speculative concerns. If you’re not one of those people, it’s not worth worrying about.

Yeah

I actually think AGI is a potential problem worth worrying about. It has potentially huge implications, we’re not sure what the timeline for its creation will actually be, and in light of those facts, there’s comparatively few people working on it. I’m in favor of more research and strategy going into it.

The thing is, that research should be done by actual experts, not by an organization operating under the banner and philosophy of someone who compensates for a lack of academic or professional expertise by being utterly convinced of not just his own intelligence, but the awesome power of that intelligence (I’m talking about Yudkowsky and MIRI here).

Just to give an example, one can’t help but notice that MIRI doesn’t tackle things like deep reinforcement learning (which is at the forefront of advanced AI nowadays) but rather focuses on the very theoretical, like game theory and decision theory. And yeah, game theory and decision theory are legit fields of study with actual experts and are quite possibly relevant to developing AGI, but they’ve also been endlessly abused and misused by self-proclaimed “very smart people”, e.g. “rationalists”. All this adds up to making me doubt that MIRI is contributing anything useful. It’s a shame that many of the most prominent AI research organizations are contaminated by Yudkowsky’s influence.

Do you agree with Yudkowsky’s idea that we all have a moral imperative to donate money to AI research?
[deleted]
I agree

I originally thought “well I’m neither interested nor super-smart enough to look at your problem, but I expect it’s good that someone’s on the case”

then I found out what LW actually did hoo boy

bit like cryonics really - I originally thought “plausible, long shot, who knows!” then I looked into it and went “holy crap what is this idiot nonsense”

Cryonics never really impressed me tbh

We know that we don’t know what advances are necessary for strong AI; but we are pretty sure that at least some novel ideas are needed, and that relatively strong AI is possible in principle. It appears unlikely that we’ll crack that problem soon.

We don’t know the practical limits of AI (e.g. posed by complexity theory), but we know that some limits exist. Acausal robot god-style AI is probably fundamentally impossible (fantasy, not SF).

There are good reasons to expect that strong AI will be pretty dangerous once possible. We don’t know whether alignment of a specific future architecture will be easy or hard. It is at least plausible that some strong AI architectures will be easier to build than align.

I am doubtful that the rat movement has made, or will make, substantial contributions on this. It is plausible but not certain that meaningful contributions to AI alignment are simply impossible until we understand more about AI.

However, lots of tiny academic subfields exist and deserve to exist. I think it is good that some people are thinking about this, and it is a good idea for some tiny fraction of AI research to be directed at such safety concerns. On the margins, it is good that some people can think about this kind of safety concerns without getting laughed out of the room. I would have expected the rat movement to be damaging rather than conducive to that (“doomsday cult ahoi”), but it seems that this is not the case. So I give +1 to Yud et al for successful PR efforts for a cause that is good on current margins (moving from laughable to tiny subfield is good; moving from small subfield to mainstream would be bad).

Yeah..

> an AI will be created that will murder humanity

Humanity is something that we can and will overcome.

Huh?
Lol, transhumanists.

The idea that AI will surpass human intelligence, historically soon, and that the values espoused by superhuman AI (whatever those values may be) will become the values that govern the earth, is completely logical. Given that, it is also logical to prioritize the task of identifying and codifying human-friendly values, with a view to ensuring that the first superhuman AIs follow values that are friendly to us, rather than unfriendly to us. And even now, MIRI, formerly SIAI, is one of the very few places directly tackling this task (e.g. under the rubric of “AI alignment”). So I certainly regard their concerns and their efforts as meritorious and highly important, even though they are now shrouded in a haze of controversy and notoriety.

But I’m an old-school Yudkowsky camp-follower, from the days before the detour into Less Wrong rationalism, and certainly from before today’s epoch of culture war. There was a time when skepticism about Less Wrong revolved around things like opposition to dogmatism about the many worlds interpretation, rather than concern that Less Wrong rationalism is a gateway to reactionary badthink. By /r/SneerClub standards I am undoubtedly a filthy rationalist sympathizer, so, you may wish to regard my opinion as both unrepresentative and unsurprising.

What about the fact that MIRI has accomplished pretty much nothing and GiveWell literally declared that it was more effective to their cause to donate money to other organisations?
As far as I can see, MIRI is the main reason that anyone at all thinks of the problem of human-friendly superhuman AI, as a problem that might be confronted directly, and actually solved in detail and by design. Otherwise, superhuman AI is viewed with fear, with hope, with skepticism, as a problem that can't be solved, as a problem that will solve itself, as a problem that will be solved by application of some vaguely specified principle. But if I want concrete ideas on how to even make a start on actually solving the problem, where can I turn? MIRI's forums, MIRI's publications, works by other people listed in MIRI's literature reviews... and that's it. Especially with respect to the task of identifying and specifying human-friendly values.

Of course there is a vast literature on ethics and politics, in which humans debate what their values should be. So you could say we have a lot of existing proposals, for what the values should be. But I can't even think of another existing AI project, whose declared objective is the creation of an ethical general intelligence.

And MIRI is being appropriately cautious about picking a particular ethical system apriori, and making that their pole star. They don't trust the ability of human beings to get such a system 100% right, unaided by computation or science. They would like to have a rigorous method of determining what existing human values actually are (through cognitive neuroscience), and a rigorous method of going from there to what an autonomous AI's values should be, if it is to be a benevolent force in a world of human beings.

At the rate that things are going, we may simply run out of time in which to solve those problems; in which case we had better hope that whatever rough ethical heuristics the creators of superhuman AI use, will turn out to be enough to avoid bad outcomes. But except for the time constraint, there's no reason not to try to solve the problem definitely and decisively, and I don't see anyone else trying to do that.
You didn’t answer my question
I said something about what I consider their significance to be, and why I take them seriously. I am not aware of any argument from GiveWell that would change my views.
That’s fair.

If you’re familiar with the AI/ML space, you’ll know that we can’t get symbolic reasoning from gradient descent. Without symbolic reasoning, the current generation of AI is just a fancy repackaging of computational statistics.
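A minimal sketch of that brittleness (the library, model size, and target function here are arbitrary illustrative choices, not anything from the comment): a regressor fit by gradient-based optimization matches its training region but never acquires the underlying rule, so it falls apart outside it.

```python
# Toy sketch: a network fit by gradient-based optimization interpolates its
# training data but learns no symbolic rule, so it fails as soon as the
# inputs leave the region it was fit on.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Train on the "rule" y = sin(x), but only on x in [0, 2*pi].
x_train = rng.uniform(0, 2 * np.pi, size=(2000, 1))
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(x_train, y_train)

# Inside the training range the fit looks impressive...
x_in = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
print("in-range error: ", np.abs(net.predict(x_in) - np.sin(x_in).ravel()).mean())

# ...but just outside it, the learned function has nothing to do with sine,
# because no representation of the underlying rule was ever acquired.
x_out = np.linspace(3 * np.pi, 4 * np.pi, 200).reshape(-1, 1)
print("out-of-range error:", np.abs(net.predict(x_out) - np.sin(x_out).ravel()).mean())
```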

We are a long, long way off from AGI. The foundations for AGI aren’t even there yet, so there’s no reason to have an existential crisis over it.

What you should worry about is the way that AI/ML is being used to consolidate power. ML is really good at finding correlations and patterns that humans have had a hard time noticing in the past. In a capitalist context, this means that ML can be used to exploit others in novel ways and to accumulate capital rapidly.