posted on April 18, 2022 07:16 PM by
u/JohnPaulJonesSoda
A good takedown of the NYT’s recent piece on OpenAI and GPT-3 (here)
and the ways in which the hype around AI is framed to make these
projects seem more compelling and transformative.
Yeah, good article. The frustration with being labeled just a sceptic, when they are also trying to bring up questions like “How do we shift power so that we see fewer (ideally no) cases of algorithmic oppression?”, is very understandable.
It's always annoying in these discussions when half the participants hear "oppressive AI" and think of predictive models used in parole decisions, but the other half are thinking of Skynet.
Yeah. I think a big misconception about AI is that it's anything other than simple automation. AI is not being used to come to conclusions humans couldn't; it's being used to come to the conclusions we want. And when we are already biased, the AI will automate that prejudice.
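(To illustrate that last point with a toy example I made up, nothing from the article or the NYT piece: fit a model to biased historical decisions and it happily reproduces the bias, even for two cases with identical underlying risk. The parole-style framing and every number here are invented.)

```python
# Purely illustrative, synthetic example: the "historical" labels below are
# biased against group 1, and a model fit to them automates that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # two groups, 0 and 1
risk = rng.normal(0.0, 1.0, n)             # identical "risk" distribution for both groups

# Biased historical decisions: at the same risk level, group 1 was denied more often.
denied = (risk + 0.8 * group + rng.normal(0.0, 0.5, n)) > 0.5

model = LogisticRegression().fit(np.column_stack([risk, group]), denied)

# Two applicants with exactly the same risk score, differing only by group:
same_risk = np.array([[0.0, 0.0],
                      [0.0, 1.0]])
print(model.predict_proba(same_risk)[:, 1])  # group 1 gets a higher denial probability
```

The model hasn't discovered anything humans couldn't; it has just learned to predict the biased decisions it was shown.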
Really interesting read, both the NYT article and the critique. It
set me to wonder - and I’m just going to throw this out there - what do
you suppose is the chance that some or all of the posts at ACT, and
maybe SSC before that, are really in fact responses by GPT-3 to queries
Scott has submitted to it? (Most likely multiple responses stitched
together to make one “unified” post).
I mean collectively we sneerers agree his posts are, if nothing else,
sneer-worthily prolix. As I considered some of the real responses by
GPT-3 to certain example queries in the NYT article, I got a feeling of
deja vu that I’ve read meandering prose like this before.
I think there were some good points in this article, but I don’t feel
very convinced of the claim that these recent LLMs (or other kinds of
models like DALL-E) are unimpressive. Despite the myriad of
sociopolitical and ecological problems around how these systems come
into existence, they are undeniably remarkable. To claim otherwise seems
like opposition for the sake of opposition. Of course that doesn’t mean
the hype and overblown claims about GPT-3’s capabilities shouldn’t be
criticised.
The title urges us to "resist being impressed", and one of the key questions the article raises is "Why are people so quick to be impressed by the output of large language models (LLMs)?". I also think the tangent about comprehension tests is unnecessary. At a certain point it's just semantics for its own sake.
If you read rather than very briefly skim the article, it’s clear what those quotations mean in context. And what on Earth is tangential about the comprehension tests stuff? “At a certain point it’s just semantics”: *the whole point of the discussion of language processing and AI is by definition that it’s a discussion about semantics*.
No, I have read the article closely and I agree with many of the article's main claims. For instance:
> Puff pieces that fawn over what Silicon Valley techbros have done, with amassed capital and computing power, are not helping us get any closer to solutions to problems created by the deployment of so-called “AI”. On the contrary, they make it harder by refocusing attention on strawman problems.
This is something that I really believe in. However, when you say:
> whole point of the discussion of language processing and AI is by definition that it’s a discussion about semantics
No, I don't think it is. The more interesting question is how systems like GPT3 are developed, e.g. the ecological and sociopolitical impacts of their development within the context of OpenAI and other private institutions. It's not "whether GPT3 truly *understands* the concept of tense". We can argue forever about what it means to have linguistic comprehension.
The material questions are more interesting and important than the semantic questions when it comes to developing a persuasive critique. That is why I find the points about comprehension testing "tangential".
Perhaps I am coming at this from the PoV of "obviously GPT3 doesn't understand language in the same sense that humans do - surely no one thinks it does". But even if people do think this, I still think trying to convince them otherwise is a tangent to the key material questions. It only derails the conversation.
The only way I can read your objection now is that you think she should have written an article on something different, which is - frankly - a bit silly to me.
Yes, perhaps the ecological question is more interesting, but she’s addressing *a different issue* in the first place.
It is, to say the least, absolutely clear that many people do *not* share your view that no one thinks GPT3 understands or has the fundamental beginnings of understanding language, because she is addressing *exactly* the people who think otherwise. How on Earth would changing the subject to your personal preference **fail** to derail the conversation?
When Alan Turing devised the Turing Test, he didn’t account for how the human tester might be a rube.