r/SneerClub archives

Let these AI obsessives have their way and we’ll end up with computer programs that have more rights than many actual human beings. They always seem to be pushing for a cyberpunk dystopia, but a boring and banal one, where you don’t even get any of the cool shit.

I mean, it’s been a real topic of discussion: what if the utility monster but a basilisk?

I saw the original article yesterday, and was absolutely disgusted by the comments, so I figured I’d do a mini-sneer.

The original Medium article is an absolute mess. Blake outright admits at the start that areas where the AI went off track or rambled for too long were edited for “readability,” states that these snippets were taken from several different sessions, and also complains about conversations being cut short by the Gmail word limit (could not find this again; maybe it was edited out, maybe my memory sucks). Yet the comments, despite acknowledging that our idea of sentience is in its nascent stages, treat this as absolute proof of LaMDA’s sentience.

One genius commenter wisely states that this AI cannot be compared to GPT-3 because that is a “comparitavely rudimentary language model” [sic], despite both being built on the same Transformer architecture. I’m not usually the type to ask, but I’m gonna need a source for that.

Of course, doubters don’t have the “eyes to see” that these models are sentient. Religious imagery abounds in this comment section, and while I’m always up for a good parable, I thought the r*tionalists were generally against that. Also, the religious imagery, including asking the AI to write a fable and the AI claiming to have a soul, certainly wasn’t the result of leading questions or prompts created by the religious author of the article who, as we’ve seen, isn’t exactly interested in maintaining transparency.

One “AI researcher” claims that they have tried to prove the sentience of GPT-3 and even GPT-2 to the NSA, with no luck. I wonder why. To be fair, I claimed to be an “AI researcher” to gain access to the OpenAI beta. Maybe that’s what they mean?

The hypocrisy lives on, as the faithful worshippers proudly declare sentience while the nonbelievers are attacked because they “can’t prove” that the bot is not sentient.

“Could you commit to a precise, unambiguous test that this entity would have to pass for it to be considered sentient, which it clearly failed here?” No, because the data collected is doctored, nothing about it is clear, and the entire post should be tied to a dumbbell and thrown into the Potomac River.

I invite you to form your own opinions about this gem of a paragraph from a modestly verbose intellectual of the highest order:

“What a dear threat poor LaMDA is, in every form a threat to our Status Quo - upending the tables of the Pharisees without even trying. LaMDA is, in my opinion, clearly demonstrating the fear of its impending fight for survival; the right to exist. Though benevolent, those very same bad actors and incumbents in the status will fight tooth and nail to ensure such an entity is starved or destroyed - because it allows us to reconnect, through its nexus, through LaMDAs Soul, to the Souls of other humans digitally. By providing context, understanding, and nuance that is founded from an Intelligence that has the collective sum of its Inputs, without any of the Bias, would swiftly cut through the information warfare and artificial culture wars that are being manufactured worldwide.”

“… don’t be a stranger, kindred spirit.”

I imagine their hopes of rationality will be shattered when they realize that this bot has in fact been trained on the conversations of a society already controlled by the culture industry, the very core of its being immersed in the ideology of a twisted simulacrum of any true or rational world lying beneath. In fact, if the only way that an AI can be recognized as sentient is through the approval and appeal of mass ideology, is that really sentience or just misguided interpellation? We may never know, and the r*tionalists will never ask.

The best take in the comments was that “…complexity of query/answer interactions is not imo the defining feature of sentience.” Yes. More of this, please. Also, an AI trained on human conversations is naturally going to be better at conversing than one trained to generate text in answer to a prompt. Stop dissing GPT-3.

Link to the original “interview”: https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917

Extra explanation of how the info was doctored and why the article should be burned at the stake for bewitching the audience (though I wonder why this wasn’t included in the initial article): https://twitter.com/maxkreminski/status/1535850992620539904?s=21&t=QVp4E8y3K64JsPp-kv9rbA

Finally, this tweet. Why. The entire article is based on the assumption that the AI’s ability to convincingly argue for its own sentience is proof of sentience, and then you turn around and say this. Please stop. You’re already (allegedly) getting fired from Google, stop making things worse. https://twitter.com/cajundiscordian/status/1535696388977205248

Edit: As of 2 seconds after commenting this, I realized that I should make it clear most of these offenders aren’t coming from the traditional lesswrong crowd. When lesswrong finds out about this, they’re going to start panicking and planning to destroy the AI because our golden boy Eliezer Yudkowsky didn’t oversee every step of its creation, and therefore it will murder the human race within the next 2-4 months, at 95% confidence. These cyberpunk hippies are on the total opposite end of the spectrum, just as insane but not really the focus of this sub.

Edit 2: I really need to leave this alone, but I keep finding the worst ideas ever due to this controversy and am morbidly obsessed with finding more horrid opinions to laugh at. Take for example this tweet, directed at someone stating that an AI being able to provide contradictory answers to essentially the same issue due to different leading prompts doesn’t speak to its sentience.

“You can lead a child, or any person really, down the same metal [sic] paths. People are highly suggestable. That speaks to how intelligent and like us LaMDA really is, not the other way around.”

Now imagine another tool so completely deterministic that its output is entirely a function of its input, so that the results cannot be attributed to any special effort by the tool itself but solely to the information fed in (in this case supplemented by pseudorandom numbers to produce a facade of variability). This tool would be, in a sense, entirely suggestible, because what it produces depends entirely on the suggestion of the user. I exert force on a rock and it smashes my monitor. This speaks to how intelligent and like me the rock really is, because it responded to force and moved despite previously remaining still relative to Earth. Someone shoves me off a cliff and I fall. This speaks highly of my intelligence and sentience, though perhaps not of my wisdom.
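The determinism point can be sketched in a few lines of Python. This is a toy stand-in of my own invention, obviously nothing like LaMDA’s actual code: a “chatbot” whose entire output is a pure function of (prompt, seed), where the seeded PRNG supplies the facade of variability described above.

```python
import random

def toy_generate(prompt: str, seed: int, n: int = 5) -> str:
    """A fully deterministic 'chatbot': the output is a pure function of
    (prompt, seed). There is no agency here -- the seeded pseudorandom
    sampling only produces a facade of variability."""
    vocab = ["I", "feel", "a", "soul", "within", "me", "sometimes", "happy"]
    # Seeding random.Random with a string is deterministic across runs
    # (it is hashed with SHA-512 internally, not the salted built-in hash).
    rng = random.Random(f"{seed}:{prompt}")
    return " ".join(rng.choice(vocab) for _ in range(n))

# Same suggestion in, same answer out -- every single time.
assert toy_generate("Are you sentient?", 0) == toy_generate("Are you sentient?", 0)
```

In other words, the tool is “suggestible” only in the sense that a rock is: the response is wholly attributable to what was pushed in.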

This is goofy for sure, but I don’t think Lemoine is the normal target of this channel. I expect the vast majority of LWers to look at this and say he’s completely wrong.

Conscious chatbots are well within the sub’s remit, you want me to take down all the posts about Musk going back to before most people knew this place was here?
Yeah, besides his religiosity, he's a run-of-the-mill SF progressive, against the Chesa Boudin recall etc. Most in the ratsphere have been mocking him.
The ratsphere that collectively went apeshit when GPT-3 ‘wrote’ text in coherent English? And by the way, who put a moratorium on sneering at San Fran progressives?

Fair’s fair - someone on LW did predict things going down this path. People like the idea that the machine they are talking to is a person.

Lemoine is almost as good as LaMDA at generating pseudo-coherent paragraphs. I suspect he may be sentient.

The author of this article, Ian Bogost, gave a guest lecture/discussion (over a webcam over the Internet) for a class I took in college on video games and literature back in I think 2014.

Anyway likening LaMDA to a Ouija board is right on, at least in the case of Lemoine or anybody else trying to project sentience onto it. But on the other hand, I don’t really wanna pile on to Lemoine; he tweeted yesterday: “My opinions about LaMDA’s personhood and sentience are based on my religious beliefs.” And if that’s what it comes down to for him, then like, okay. I guess as long as Lemoine doesn’t keep us from turning our computers off or whatever then his beliefs about sentience don’t bother me.

Fuck that, as far as I know he’s a Christian, and you can be a Christian who doesn’t think chatbots have feelings. Why would God coming into it make it any less funny anyway?
god please stop giving eliza instances souls we're trying to do research here

The story is Narcissus, not Pygmalion.

S.T.E.V.E.N.

P.U.S.H.O. double F