r/SneerClub archives

Hi, I visit this forum occasionally for good cringe, but only from time to time, because it’s hard to handle elephant doses of it and so much is wrapped in ginormous piles of “iamverysmart” material.

So, wondering about those AI cultists predicting, or even hoping for, a new age where the dumb apes known as humans won’t be at the steering wheel anymore and a glorious enlightened robot will tell us all what to do while we obsolete meatbags just listen and thrive, got me thinking…

One thing is bothering me. Imagine a situation: an AI bro thinks he’s got himself a pretty smart computer, feels confident, approaches the public and says:

“Guys I did it! I have a perfect plan, all we need to do is listen to everything this excellent computer says and revamp our entire society according to these exact, strict specifications!”

“What if we don’t like it?”

“Huh? Why?”

“We’re supposed to rearrange everything according to your vision because a computer told you so?”

“Yeah, isn’t it great? Let’s replace politics with AI, no more arguing over unnecessary stuff, we’re doing what the robot says from now on, so are you with me?”

“Haha fuck no!”

So… how does an AI bro recover from this? What is the backup plan? The public can just laugh in their face. How will you save entire doomed generations from basilisk damnation if all sides of the political spectrum laugh their asses off? Surely some dude wrote 25 pages of verbose passive-aggressive bullshit to work around this problem of ignorant plebes not buying techbro overlordism from the get-go, somehow?

Non-sneer answer: if the AI is so great that it can organise human society it will see that coming too.

Sneer answer: it was written by humans, though. So… unlikely.
Isn't the very point of AI that it figures stuff out on its own? Otherwise it's just regular software.

> So… how does an AI bro recover from this?

They’d probably be working for McKinsey at this point, and the government would pay them to hear their advice.

I mean OP's description is basically what McKinsey is already supposed to be, according to its clients. Whether they work their magic by feeding the data into an AGI or a future Indiana mayor is immaterial. This seems like a truism of lazy AI futurism in general: people imagine some giant inconceivable leap but it's really a series of gradual steps, and they fail to appreciate how many of those steps we've already taken, in the implications for policy and society if not yet technology.
Yeah true, I just meant to say that if some guy is smart enough to make this computer, they need computational power, at which point they get noticed, and McKinsey will snap them up. Also meant to sneer at Yud, who has, I think, never actually programmed anything.
> future Indiana mayor

I smell a rat.
Oh wow, politicians talking to experts and maybe sometimes listening to them (like always). That's not the singularity-automated future of an AI god prescribing our fate that I've been sold. I demand a refund!

If you’re talking about Yudkowsky, I believe the rough idea is that the AGI will rewrite its code until it’s superintelligent, then use super-persuasion to get its handlers to give it unrestricted internet access and resources, then use super-science to develop nano-bots or whatever it needs to take over the world and enforce the society it’s decided on, even if the original programmers are telling it not to.

This is considered incredibly likely, even inevitable, unless you give money to MIRI, who will prevent it by publishing mediocre game theory papers.

Except their idea is that they’ll build an AI that would do just that, but it will do what we would have wanted if we were smarter and/or better informed. Which is either what we want, in which case it's redundant to specify, or not at all what we want, i.e. the same shit as with the unfriendly AI.
[deleted]
comments like that just increase existential risk

I guess the best way to address this would be to use Bostrom’s ‘taxonomy’ for the different types of superintelligence: Oracle, Genie and Sovereign.

The type that you seem to be describing is an Oracle, in which you ask questions and receive answers. A Genie is more active: you would make requests of it, and it would then implement them. For a Sovereign type, this would not be an issue, as it would be free to act without needing human approval.

As for the other two, you can of course choose to reject an Oracle’s answer or spurn asking the Genie for anything. However, I don’t imagine that your proposed situation would actually happen, because it’s unlikely you would invent an AGI and then immediately task it with solving everything. More likely, an Oracle would be asked to solve the Riemann hypothesis, or a Genie would be tasked with curing dementia, and if they prove they can solve these ultimately much smaller tasks, it’s likely they would be gradually trusted with more and more responsibility until “everything” is rearranged.

> they would be gradually trusted with more and more responsibility until "everything" is rearranged.

This would maybe work for technological and scientific applications and boring technocratic policies, but there would likely be one or several points of contention, no-go zones that could be absolutely radioactive and cause unlimited resistance and tension: say, social policies or other political agendas where which direction to take depends on belief system or politics.

Imagine you tasked Oracle God to cure dementia, and it cured dementia. You asked Oracle God to solve some hard math or engineering problem, and it delivered. Now you ask your awe-inspiring advisor: "Hey Oracle God, would you help us solve something more complex and difficult, like poverty?" and Oracle God says: "Sure, let's implement a number of social policies and overhaul the economic system, perhaps make some adjustments towards flattening wealth inequality, more sharing, less profit-driven incentives, more…" "Wait, is that S-Socialism?" "It could classify as some version of it, maybe. Why?" "OMG IT'S GONE CRAZY, SHUT IT DOWN, SHUT IT DOWN, CALL DOWN THE AIRSTRIKE AND KILL IT, KILL IT WITH FIREEEEEEE!"

Or: "Hey Oracle God, we need some extra money to pay for our new programs. Can you find some inefficiencies in the system, shut them down and reroute the funding to important projects?" "Sure, those subsidies to religious organisations are pretty useless, also…" "We're sorry to inform everyone, but it seems our Oracle God has malfunctioned and stopped working. I'm afraid we're back to the drawing board."

Or: "Gun rights are detrimental to public safety; after countless simulations I've determined that you'll achieve better results without them on average." "Can you run those simulations again?" "OK… done. Yep, same result." "Uhhh, we'll get back to you later."
I think there are two issues with this response (forgive the screwed formatting):

1. It's quite short-termist. Unless you're way, way more optimistic about the development of AGI than the average, I'd be surprised if these current political issues were still so contentious in the 30/50/100/200 years that it takes to develop a true superintelligence. Of course, the general point that some political or cultural issue could arise which hinders the implementation of AI suggestions is true. However, I think this is rendered obsolete by the second issue:

2. It's very American-centric. Perhaps the US political climate would prevent an AI from implementing policies that impacted on gun rights or religion or whatever, but there are a huge number of potential groupings - whether another nation state, a corporation or NGO, or just a lone genius inventor - that could develop a superintelligence and use it to gain a massive advantage over every other nation. If there's a slow takeoff, where AI is only human-level or slightly above, then this could take a long time, but even then it would eventually be overcome.
For the record, I do not believe we will develop AGI even in 500 years, or maybe ever. It's just a fun thought experiment on what AI freaks would do if the public rejected their plan to replace all decision-making with "robot said so". The current political issues I mentioned were just an easy example to illustrate that, in the end, humans will not budge on their core values and dislikes even if some perfect AI had a perfect plan that would make things better. In other countries you'll find plenty of points of contention and their own unique irreconcilable differences that often cannot be bridged or compromised on, and cannot be abandoned unless you literally dominate the other side and beat them into submission. In 200 years there will be other irreconcilable core values that people disagree on; that is for certain.

See, the coming acausal robot god will not need to persuade the masses. It will amass power regardless of human consent once it is powerful enough to singularitize.

Isn’t one of Yud’s whole points that a sufficiently advanced AI will be so smart that it will just be able to persuade everyone into letting it take control, even the people who think that letting an AI take over is stupid? Of course, how that actually happens gets handwaved away by the magic of “it’s just that smart”, but the point is that this is definitely a very real possibility that we definitely need to invest a lot of money into now to try and avoid (so please donate to MIRI, thanks)

Yeah, he thinks that somehow the AI is going to magically convince people to let it out of the box. He offered to pay people money if they could say no in a simulation (he pretended to be the AI, naturally). Unfortunately he's no longer doing this, because it would be a really easy way to make money.

They have decided that the AI, being superintelligent, will also be super-persuasive, something that, if you’ve ever tried to persuade someone, you would know doesn’t and can’t exist (superintelligence also can’t exist, because intelligence is socially defined and constructed).

An AI person would probably counter that humans are superintelligent from the perspective of animals like apes. A fun experiment for them: let's drop a naked Einstein into a gorilla enclosure and see if he can take over ape society using the super-persuasion of his super-intelligence.

[deleted]

Right, as the other poster noted, they would like to sway this inventor with their self-righteous mediocre game theory papers, totally not pushing a "my worldview is best worldview" agenda.