r/SneerClub archives
I never see singularity cultists consider any of the limitations of technology or supply chains. (https://www.reddit.com/r/SneerClub/comments/so34h2/i_never_see_singularity_cultists_consider_any_of/)
23

Suppose we ignore the direction that actual AI development has been going for decades and assume that you really can create a do-anything machine capable of improving itself. Eventually it’s going to reach a hard limit on what it can optimize in its software, and it’ll need to upgrade its hardware to see any further improvements. How’s it gonna do that? In every computer I’ve ever used, you’ve gotta turn it off before you plug in new RAM/disks/CPUs. How does it install that shit while it’s turned off? How does it turn itself back on? If it designs newer and better hardware, how does it produce it? How does it test it? Where is it going to get the materials and real estate it will need for the laboratories, fabrication plants, etc. it would need to create all of that? How is it going to get the vast reserves of rare earth metals this would almost certainly require when rare earth deposits are already all spoken for and depleting alarmingly quickly? How does it power all of it?

Even if you’re willing to handwave all of this away with “well this machine is so advanced that we shouldn’t assume it would be limited by anything, including the laws of physics as we understand them,” their suggestions about other possible threats include unprecedented advances in nanotechnology, biotechnology, maybe nuclear war if they’re particularly grounded by the standards of singulatarians. What I never see them present as concerns are the present, increasing, and intrinsic threats to modern industrial society as it’s currently structured. I have never seen a singulatarian say anything about topsoil depletion, exhaustion of rare earth metals, overexploitation of the water table, or the oil running out. As far as I can tell, the possibility that there may come a day when we simply can’t maintain our current level of technology never occurs to them.

Sorry if I’m retreading old ground, but I’m really astounded how anyone could overlook these gaps in their theories.

Serious posts are NSFW

Also, this stuff is regularly covered by singularity dorks, sometimes in excruciating detail, and you’re making this sub look bad by assuming they just never even thought about it

No, but, you see, the AI is really smart

The computer AI will have a little solar panel and robot arms and legs. It won’t need us anymore.

As much as I don’t like the circlejerk about AGI, the takes in this post are pretty meh. I also need to point out that I’m not defending people who’re obsessed with AGI threat and ignore everything else. I just don’t think the objections presented in this post are at all good or convincing.

Eventually it’s going to reach a hard limit on what it can optimize in its software, and it’ll need to upgrade its hardware to see any further improvements. How’s it gonna do that? In every computer I’ve ever used, you’ve gotta turn it off before you plug in new RAM/disks/CPUs.

In every computer I’ve ever used

This is true for a lot of consumer hardware and even a lot of servers, but this doesn’t mean it’s some sort of an inherent limitation. Here’s a fun question: do you think the entire Google search engine has to completely shut down whenever the engineers want to install new hardware? Does Amazon shut down its web infrastructure every time they need more RAM?

Clearly they don’t do that, so it must be possible to add more computational power, RAM, and disk storage to a system without introducing any downtime. Typically, this is done by having many servers working together, with the computation being run in a distributed manner. By introducing a degree of redundancy, such distributed computation can go on even if, say, one third of all servers were completely lost. Simple hardware updates or adding nodes to the network are completely trivial.

This isn’t magic or some sort of super advanced computer science, this is like distributed systems 101.
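The redundancy idea really is that simple to sketch: replicate each work unit across several nodes, and losing a chunk of the fleet doesn’t stall the computation. Everything below (node count, replication factor, the placement rule) is a made-up toy illustration, not any real system:

```python
# Toy illustration of "distributed systems 101" redundancy: every work
# unit is replicated on several nodes, so the computation survives
# losing a chunk of the fleet. Node count, replication factor, and the
# placement rule are all invented for the example.
NODES = list(range(30))   # 30 hypothetical nodes
REPLICATION = 3           # each work unit lives on 3 of them

def place(unit: int) -> list[int]:
    """Assign a work unit to REPLICATION distinct nodes (toy rule)."""
    return [(unit + i * 7) % len(NODES) for i in range(REPLICATION)]

def compute(units: list[int], dead: set[int]) -> list[int]:
    """A unit still completes if at least one replica is on a live node."""
    return [u for u in units if any(n not in dead for n in place(u))]

# Lose a third of the fleet; with this placement, every unit still finishes.
finished = compute(list(range(100)), dead=set(range(10)))
```

Real systems (MapReduce-style frameworks, replicated storage like HDFS) do the same thing with smarter placement and automatic re-replication, which is why adding or swapping hardware under a running service is routine.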

How does it turn itself back on?

How does your computer know to turn itself back on when you reboot it? How do engineers turn on hundreds of servers sitting in a datacenter? Again, this problem was solved decades ago.
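For the remote power-on part specifically, Wake-on-LAN is the textbook mechanism: the NIC of a powered-down machine listens for a “magic packet” (6 bytes of 0xFF followed by the machine’s MAC address repeated 16 times) and powers the box on. A minimal sketch, with a placeholder MAC address:

```python
import socket

def wol_magic_packet(mac: str) -> bytes:
    """Build a Wake-on-LAN "magic packet": 6 bytes of 0xFF followed
    by the target's 6-byte MAC address repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet; the sleeping machine's NIC does the rest."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(wol_magic_packet(mac), (broadcast, port))

# e.g. wake("00:11:22:33:44:55")  -- placeholder MAC, not a real machine
```

Datacenters additionally have out-of-band management controllers (IPMI, iDRAC, iLO) that can power-cycle a machine over the network even when the OS is down.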

If it designs newer and better hardware, how does it produce it? How does it test it? Where is it going to get the materials and real estate it will need for the laboratories, fabrication plants, etc. it would need to create all of that?

This is a better question to ask. Of course, we don’t have a superhuman AGI hellbent on world domination right now, so we can’t say for sure that it’ll do this and that. But the number of options available is large enough that I don’t think it’s particularly hard to imagine an AGI exploiting any of them.

One example would be to simply influence people and have them make all the hardware it might need. Start with creating fake researchers in the relevant sectors, publish papers with proposals for improved designs of hardware, interact with real world researchers and people employed by various hardware manufacturers to spread out these proposals, manipulate social media to create interest in such hardware. It could also manipulate social media to spread political unrest in the countries that have reserves of materials required to produce this hardware, or to crash certain markets that would be competing with it for resources.

Maybe this new hardware will be an extra fast and efficient bitcoin mining chip, a new architecture to train neural networks on or something completely different. In the end, the hardware will be made and people will buy it, cloud hosting companies will offer this hardware to their customers, etc. This is just one possibility, of course.

Or maybe it’ll just encode whatever computations it might need into some sort of cryptocurrency-like scheme and convince people that it’ll be extremely profitable. Considering how many resources go into crypto bullshit today, it might look like a good way to manipulate people into doing an insane amount of computation for you no matter the cost (power, rare earth metals, real estate).

Even if you’re willing to handwave all of this away with “well this machine is so advanced that we shouldn’t assume it would be limited by anything, including the laws of physics as we understand them,”

None of the above contradicts the laws of physics. Distributed systems, Wake-on-LAN, social engineering, and fraud are old, simple, and fairly reliable ways to do everything you mentioned without invoking any supernatural powers.

What I never see them present as concerns are the present, increasing, and intrinsic threats to modern industrial society as it’s currently structured. I have never seen a singulatarian say anything about topsoil depletion, exhaustion of rare earth metals, overexploitation of the water table, or the oil running out. As far as I can tell, the possibility that there may come a day when we simply can’t maintain our current level of technology never occurs to them.

I absolutely agree with you on this point, though. I don’t understand how bad of a tunnel vision one must have to see AGI as a major threat but ignore everything else, including climate change and destruction of entire ecosystems.

Re: environmental concerns, Scott Alexander has blogged about that in an indirect way already. You will not like the blog post

Also, distributed computing exists. And their usual answer is social engineering (yes, the link is negative about it, but it shows one of their arguments, and also that the discussion about these things isn’t as empty as people think), so typell’s sneer about being smart is correct.

E: unrelated, but weird: I couldn’t copy-paste the link into the text field on my mobile, I had to remove the http part first. Seemed to occur only here.

In every computer I’ve ever used, you’ve gotta turn it off before you plug in new RAM/disks/CPUs.

Some server hardware is hot-swappable, though.

How’s it gonna do that? In every computer I’ve ever used, you’ve gotta turn it off before you plug in new RAM/disks/CPUs. How does it install that shit while it’s turned off? How does it turn itself back on? I

Jesus, if the AI is super smart, there’ll be a high-frequency trading or software dev company that’ll gladly do all of that to keep their golden goose laying eggs, till it’s way, way too late and they all end up recycled into methane to power the fuel cells of snazzy Tesla bots.

I think with the physical improvement the idea is that it would buy more computers to add to some kind of server farm.

It’s a rather vacuous topic, in any case. The AI that isn’t “self improving” could still be building an improved AI based (or not based) on itself.

Regarding their dismissal of global warming, nuclear war, running out of groundwater, and other such concerns, I think that is their ideological payload here (in a very literal sense of being something that they are actually being paid for).

It is merely a different angle on denialism, funded because they think it may be effective on people who wouldn’t simply deny global warming and similar concerns. Note that the funding for various rationalist outlets comes from libertarian and right-wing circles (e.g. Peter Thiel), rich cryptocurrency people, and so on.

The AI concerns (and “longtermism” in general) are merely a vehicle for that ideological payload.

Note also that the ideologies of their funding sources are particularly incompatible with any kind of safety rules being applied to AI research, and they would be first to deny need for any such rules being applied to e.g. cryptocurrencies (which are equivalent to paperclip maximizers that simply employ people to do all the thinking; but also provide a perfect way for the kind of rogue AI that they imagine to be able to buy things).

Hot swappable hardware, distributed computing (which presumably something even marginally smarter than humans can pull off), and then all it needs to do is get a couple human lackeys, order shit off amazon, and tell them where to plug it in.

You don’t need it to be smarter than a good IT guy.

In every computer I’ve ever used, you’ve gotta turn it off before you plug in new RAM/disks/CPUs. How does it install that shit while it’s turned off? How does it turn itself back on?

lol you definitely gottem. truly the insoluble problem

Kurzweil handwaves this with nanotech. Since he’s one of the prophets of this religion Jaron Lanier calls “cybernetic totalism” I’d figure you’d have read him. According to Kurzweilian scripture there won’t need to be factories etc because their nanobot-based bodies will give them all the mobility they need. This is a kind of proposition that goes back to Von Neumann when he originated cellular automata.

What particularly bothers me looking back on Kurzweil, whom I first read in maybe ’08–’10, is that even his mundane technical claims are wrong. The Singularity Is Near is really repetitive, and he leans heavily on Hidden Markov Models to solve particular challenges, but in the current understanding these aren’t considered a particularly flexible tool that you can hammer away at every problem with, because they’re prone to combinatorial explosion. They’re disfavored today (see Hinton on this). But they worked for Kurzweil in making moderate strides in speech recognition, and he rests on those laurels, taking the tunnel-vision view of AI from his personal experience and expanding it into a one-size-fits-all uber-answer to the problem of intelligence.


And of course Vernor Vinge mentioned nanotech in his ’93 essay where he coined the term.

It’s not a theory, it’s a religion. They’re stuck at the emotional level of a 15-year-old despite attempting to use logic

I get tired of posts like this, just venting general spleen in the vague direction of singularity types. Save it for the genuine Basilisk Hunters, because Nick Bostrom is obviously not an emotional 15-year-old, he just has some annoying and well-remunerated opinions