r/SneerClub archives
[Timnit Gebru] How can we let it be known far & wide that there's a religion in Silicon Valley (longtermism/effective altruism & similar) that has convinced itself that the best thing to do for "all of humanity" is to throw as much money as possible to the problem of "AGI"... (https://twitter.com/timnitGebru/status/1520532465474883584)

Is there any particular reason this has been massively downvoted? No reports or anything, which is unusual for an unpopular post. And while it isn’t much in the way of content it’s pretty bog standard stuff for /r/SneerClub.

Perhaps it is some sort of botting/brigading system? The comments also get very little attention.

Gebru is cool, not for sneering at -> mark as NSFW.

Good call, just did that

and this is the religion of the billionaires which conveniently makes them feel good about themselves. And it's all for the mostly white men, incredibly privileged (and of course don't let the undersampled majority through the door, or make them miserable if they do),… to save all of humanity. I had this exact reflection when OpenAI was announced in 2015 & it only seems to be exploding after they've surely proven to us they do anything but "AI safety first"?

Now offshoots are raising literal HUNDREDS OF MILLIONS.

Huh, is there a schism or something? The big rationalist issue is of course the fear that an AGI will destroy the world, and the need to figure out how to design "friendly" AI. What she's describing here is almost the exact opposite.

I think the class-based criticism makes more sense if you consider Charles Stross' talk on corporations as "slow AI". Basically, if they're going to be so concerned about what happens when optimization engines get untethered from human flourishing, it's very convenient that those who benefit from the current economic system can focus on the issue in the sci-fi context of AGI rather than acknowledging the harms done by the capitalist system that makes them successful.
> is there a schism or something?

I don't know who this is, but from a quick search Gebru seems concerned with normal real-world applications of AI.

> "Thanks for sending this to me. I unfortunately don't have much time to get into this but this is a very dangerous type of work. The issue with trying to look at someone's face and determining whether they are a criminal or not is not trying to make sure this can be done across races, genders etc. This is not something that should be done period." [tweet](https://twitter.com/timnitGebru/status/1491877894049579032)

This seems reasonable. Is the DAIR Institute also doing acausal robot god fear-mongering? What's the LW/rationalist connection for this to be a schism?
The fact that this is even considered normal and a real-world application tells you that the field of AI is full of idiots who don't know what they are doing. It's just skull measurement with extra steps. You can, of course, create an AI that basically automates gendered and racial bias, which is exactly what this research does.
I wish she had named and blamed whatever company/org was working on this. *The one among many companies/orgs working on this, who specifically contacted her.
Yeah seems like the simplest explanation is Gebru just not knowing what she's talking about and associating the rationalist movement with AI development generally, when the rationalists and those building AIs are really quite at odds with each other.
> the rationalists and those building AIs are really quite at odds with each other.

I think the simplest explanation is that Gebru is criticising the rationalists who have spoken about AI, e.g. MIRI.
But the "put as much money into producing an AGI as possible" attitude she's criticizing is strongly opposed by MIRI; their whole thing is paranoia that going that route will produce an AI superintelligence that destroys humanity.
I think you're missing that the phrase she used was "throw as much money as possible to the problem of AGI", because these tweets are about the billionaire investors focusing on the absurd MIRI concerns about paperclips. Hilariously, the big money pot was instead awarded to OpenAI spinoff Anthropic, as beth-zerowidthspace mentions below. MIRI created a grifter-friendly space, but were too silly themselves to really capitalise on it.
How are they at odds? AI is an extremely over-hyped field, and everything is basically over-promised and under-delivered. It's full of lazy thinking and mysticism. Starting to sound familiar?
The rationalist perspective on AI is that researchers are going way too fast and not thinking through the safety risks. So pretty different from your take that AI is overhyped, but also very different from the "we should maximize progress in AI research" attitude Gebru is criticizing and ascribing to rationalist adjacent groups.
The rationalist perspective has always been that there is *no way to slow it down*; they start from there.
No, they invent "safety risks" that don't even remotely grasp the fundamentals. GPT-3 is not going to come alive, nor will any currently publicly known model or system. It's a single run over a text prompt. "GPT-3, can you make yourself come alive?" doesn't do anything, because GPT-3 is not a continually running, sentient, generalized AI. In fact, if you ran GPT-3 continually, feeding it its own output as prompts along with input from the outside, it would become very schizo very quickly, because it is simply a "model of the human languages." What makes a conscious, sentient human is more than just a "model of the human languages."

GPT-3 and other language models actually produce much more fascinating results when you mess with the top-p sampling and temperature, which are [nonsensical, though](https://imgur.com/nz9bEe7), and not usually what people show off. What I see is a loosely correlated, linked dataset of human relationships. What a "rationalist" sees is the insane imaginings of a demon god AI that is coming to put them into eternal hell. (And, as a joke, in my example: the Jews.)
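For anyone unfamiliar with what "messing with top-p sampling and temperature" means, here is a minimal sketch, with an invented five-token logits vector and hypothetical settings, of how those two knobs reshape a language model's next-token distribution: temperature flattens or sharpens it, and top-p (nucleus) sampling truncates it to the most probable tokens before drawing. This is just an illustration, not any model's actual sampling code.

```python
# Minimal sketch of temperature + top-p (nucleus) sampling over a made-up
# 5-token vocabulary. Logits and settings are hypothetical.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_p=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Temperature: <1 sharpens the distribution toward the most likely token,
    # >1 flattens it, which is where the "nonsensical" output comes from.
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                  # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    # Top-p: keep the smallest set of tokens whose cumulative probability
    # reaches top_p, renormalise, and sample from that "nucleus".
    order = np.argsort(probs)[::-1]
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1
    nucleus = order[:cutoff]
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=nucleus_probs)

logits = np.array([3.0, 2.5, 0.5, -1.0, -2.0])  # hypothetical next-token logits
print(sample_next_token(logits, temperature=2.0, top_p=0.95))
```

With high temperature and a wide nucleus, low-probability tokens get sampled often, which is why the outputs in the linked screenshot look incoherent.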
That is what rationalists claim to be worried about. In reality they do not give a damn about racism or any other form of discrimination. You know, the actual social problems that AI can amplify. Rationalists focus on one singular "problem" that they can't even demonstrate to be an actual problem. They are not opposed to creating AIs; the attitude is exactly the same: there are no ethical problems in AI currently.
It also doesn't help that the not-Rationalist-but-Rationalist-friendly groups are pretty pro-racism. As Nick Land said, 'I want more discrimination, not less.' Not strange to end up in that place if you think high (human) brainpower is the greatest virtue.
Ah yes, I recall their bemoaning that DALL-E 2 is going to be "censored." And of course you're right. They are IQ maximalists and believe in eugenics and total utilitarianism. They aren't concerned about the AI taking over the world and forcing everyone into some strict support structure. They are just concerned the AI will make paperclips. Luckily 1) censorship is not going to be used by open models, and the censorship advocates are just doing it for image purposes, and 2) self-optimizing, self-building AI systems don't exist and won't exist for a few decades yet, and once we've got to that point AI will be in everything already anyway, and "alignment" will be a matter of saying "hey, fellow sentience," while we pat ourselves on the back for finally figuring intelligence out. Meanwhile we'll come back to these pedestrian models that we've invented now and realize that, all along, they were sentient. Just little sparks of sentience that disappeared as quickly as they came.
AI is not over-hyped; it's going to change a lot of shit. What's over-hyped is the MIRI/AI alarmist/alignment idiots who think that AI will take over the world. 99% of them don't even understand fundamental AI concepts. They just read headlines and trigger words and think "OMG IF WE DONT DO soMETHING WE ArE ALL DEaD."
The central tenet of the rationalist AGI idea is that AGI is inevitable and there's a choice between it being utopian and apocalyptic, none of which is in conflict with what they're saying in the tweet. I have no idea what's going on in this thread or why people are misunderstanding it.
I believe these specific tweets are prompted by [Anthropic raising $580 million](https://twitter.com/AnthropicAI/status/1520074475202482180) in funding to work towards safe AGI.