r/SneerClub archives
Youtube video on "AI safety". Some part of me wants to go kill the disinformation there, but I'm not sure there's a way to do that (https://www.reddit.com/r/SneerClub/comments/11scyaq/youtube_video_on_ai_safety_some_part_of_me_want/)

https://www.youtube.com/watch?v=q9Figerh89g

I guess there’s nothing to be done; it’s an echo chamber, and YouTube comments are not a place for debate.

But I don’t like these crap ideas being shared on a platform like YouTube :(

So, it’s basically just this essay set to animation.

Like many Yudkowsky essays, it is impossible to summarize, largely because he militantly refuses to have a point. But at least it has a function, which is to suggest that, just as humans are dangerous to lions and wolves on account of our superior intelligence, the Singularity will be similarly dangerous to humans on account of its superior intelligence.

Well, no. I estimate I am perhaps a million times more intelligent than any lion, and Yudkowsky is at least a million times more intelligent than any human – yet, if we locked the two in the same arena, most people would put their bets on the lion, despite Yudkowsky having at least a trillion-to-one intelligence advantage over the lion.

Not much has changed in the past five million years. One man vs. one lion, man still loses. Now … a hundred men vs. a hundred lions? There’s the real fight. And what’s the difference? Our ability to plan, communicate, and organize on a community scale. All of which, to be sure, are a product of our intelligence. But at the end of the day, we still need someone to communicate and coordinate with.

In some way I feel like Doug Stanhope here. Did “we” invent the nuclear bomb and sequence the genome? Was that you and me, Yud? Were we down in the trenches garroting Krauts at Verdun? I don’t think “we” did any of that. I’ll grant you those things took some amount of intelligence, but they also took millions of man-years of labor spread over the course of many centuries to do all the necessary experimentation and engineering. I don’t think you can really just ignore that.

Yudkowsky famously – hilariously – said that a sufficiently advanced intelligence could derive all of General Relativity from three video frames of a falling apple. Apparently in his world there is no such thing as being limited by insufficient experimental data. It may take us years of supercollider events to detect the Higgs Boson at six sigma of statistical significance, so naturally the Singularity can do it in a hundredth of the time armed with the same data, I suppose.

Intelligence is a necessary but insufficient criterion for being “dangerous”. There are eight billion of us with almost five millennia of accumulated knowledge under our belts. We own the planet and everything on it and in it. We have a pretty good head start.

(And if you really want to doom out, just consider that the “Orthogonality Thesis” applies to humans too. Even humans aren’t always aligned with human values. In fact, humans are perhaps the most monomaniacal paperclip maximizers imaginable. Still, we haven’t wiped ourselves off the planet yet, and in any event we won’t need the Singularity’s help to do so).

Intelligence means being able to derive all true facts from any arbitrarily small set of first principles, because Bayes' theorem.
I see what you mean, but talking about putting a man and a lion in an arena sort of defeats the purpose as what you're limiting there is the intelligence aspect, no? One guy with prep time could easily kill a lion. I don't see the point of the man-years of labor argument either, as simulations could occur at much faster speeds, as an AGI would be able to process a man-year much faster than one year. Also, why would you say humans are the most monomaniacal paperclip maximizers imaginable? I would argue quite the other way around.
Put me and a lion in a cage and I'll shoot it with a gun. I'm smart enough to bring a superior weapon to a lion fight; the lion isn't smart enough to know what a weapon is. Now, if you beat me up and take my gun away then yeah, I'll probably lose, but that would be me fighting against a lion and a human alliance, with you (the human) being the brains of the operation. I dunno where I'm going with this, I'm just pretty sure I'd defeat the lion with my intelligence (and a gun). edit: it's true that the gun did need to be invented first tho

When I hear or read about the “alignment problem” I always wonder why those people never specify “alignment with whom”.

I mean, it’s not like “humanity” has a giant collective “interest” that is uniform and the same. The interests of rich white kids playing at AI safety are wildly different from the interests of 90% of humanity. The interests of Thiel and Musk are wildly different from the interests of 99% of humanity.

But somehow, the diverging interests of the working class and the owning class never come into play.

I think the alignment problem is more about the instructions we give to the AI versus how the AI understands them. For example, we hire a programmer to optimize our code and make it faster; the central alignment problem here could be that the programmer removes entire functionality and thus makes it faster.
The alignment problem is: do the AI's goals (or the problem it's trying to solve) align with the goals given to it by its developers? That's the "who"; that's the scope of the problem.
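Not from the thread, but a toy sketch of the "programmer makes it faster by deleting functionality" analogy above, assuming a hypothetical optimizer that is scored only on runtime: if speed is the only thing the stated objective measures, the degenerate rewrite wins on paper while throwing away the behavior we actually wanted.

```python
import timeit

# The functionality we actually care about: sum of squares.
def original(xs):
    return sum(x * x for x in xs)

# A "faster" rewrite that games the stated objective (runtime)
# by silently dropping the functionality.
def gamed(xs):
    return 0

def score(fn, xs):
    # The specification only measures speed, not correctness.
    return timeit.timeit(lambda: fn(xs), number=1000)

xs = list(range(1000))
print("original runtime:", score(original, xs))
print("gamed runtime:   ", score(gamed, xs))           # "wins" on the stated objective
print("same outputs?    ", original(xs) == gamed(xs))  # the part nobody wrote down
```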

[deleted]

What's this subreddit's take on Rob Miles? I used to watch his stuff and found it quite appealing. It's only recently that I discovered the rabbit hole behind AI Alignment.
never heard of him - does he have any links to our very good friends in Berkeley?

If you think this is bad, wait till you hear about Kurzgesagt.

This is something I keep hearing brought up occasionally and I don't really get it. It seems Kurzgesagt usually puts a lot of research into their videos and is very transparent about any mistakes or inaccuracies.
What is the problem with them?