In my sense, “understanding” means actually knowing the content and context of something: being able to subject it to analysis and explain it accurately and completely.
This is something that sufficiently large LLMs like ChatGPT can do pretty much as well as non-expert people on a given topic. Sometimes better.
This definition is also very knowledge-dependent. You can find a lot of people who would not meet this criterion, especially if the subject they have to explain is arbitrary and not up to them.
Can you prove otherwise?
You can ask it to write a poem or a song on some random esoteric topic. You can ask it to play DnD with you. You can instruct it to write more concisely, or more verbosely. You can tell it to write in a specific tone. You can ask follow-up questions and receive answers. This is not something I would expect of a system fundamentally incapable of any understanding whatsoever.
But let me reverse this question. Can you prove that humans are capable of understanding? What test can you posit that every English-speaking human would pass and every LLM would fail, that would prove that LLMs are not capable of understanding while humans are?
Hey again! First of all, thank you for continuing to engage with me in good faith and for your detailed replies. We may differ in our opinions on the topic but I’m glad that we are able to have a constructive and friendly discussion nonetheless :)
I agree with you that LLMs are bad at providing citations. Similarly, they are bad at providing URLs, ID numbers, titles, and many other things that require high-accuracy memorization. I don’t agree that this is definitive proof that they are incapable of understanding.
In my view, LLMs are always in an “exam mode”. That is to say, due to the way they are trained, they have to provide answers even if they don’t know them. This is similar to how students act when they are taking an exam - they make up facts not because they’re incapable of understanding the question, but because it’s more beneficial for them to provide a partially wrong answer than no answer at all.
I’m also not taking a definitive position on whether or not LLMs have the capacity to understand (IMO that’s pure semantics). I am pushing back against the recently widespread idea that they provably don’t. I think there are some tasks LLMs are very capable at and some that they are not. It’s disingenuous and possibly even dangerous to downplay a powerful technology under the pretense that it doesn’t fit some very narrow and subjective definition of a word.
And this is unfortunately what I often see here, on other lemmy instances, and on reddit - people not only redefining what “understand”, “reason”, or “think” means so that generative AI falls outside of it, but then using this self-proclaimed classification to argue that they aren’t capable of something else entirely. A car doesn’t lose its ability to move if I classify it as a type of chair. A bomb doesn’t stop being dangerous if I redefine what it means to explode.
I don’t think it’s impossible. You can give ChatGPT a true statement, instruct it to lie to you about it, and it will do it. You can then ask it to point out which part of its statement was a lie, and it will do it. You can interrogate it in numerous ways that don’t require exact memorization of niche subjects and it will generally produce an output that, to me, is consistent with the idea that it understands what truth is.
Let me also ask you a counter-question: do you think a flat-earther understands the idea of truth? After all, they will blatantly hallucinate incorrect information about the Earth’s shape and related topics. They might even tell you internally inconsistent statements or change their mind upon further questioning. And yet I don’t think this proves they have no understanding of what truth is; they just don’t recognize some facts as true.