If Turing were alive, he would say that LLMs are wasting computing power to do something a human should be able to do on their own, and thus we shouldn't waste time studying them.
Which is what he said about compilers and high-level languages (in this case, "high-level" means something like Fortran, not Python)
Where did he say that about compilers and high-level languages? He died before Fortran was released, and probably programmed on punch cards or tape.
I'll try to find it later; I read that he said it in a book by Martin Davis. He didn't speak about Fortran specifically, I just used it as an analogy
Fair, I’m mostly just curious what high level languages were around at the time given how early this was in the history of programming. A quick search did not turn up helpful results.
Oh, it probably wasn't about an existing language, but about someone studying what would become high-level languages, like studying linkers and the symbolic representation of programs
Wasn't his goal to simulate a brain?
Neural networks don't simulate a brain; that's a misconception caused by their name. They have nothing to do with biological neurons
Not what I meant. What I mean is: this could be the path he would go for, since his desire was to make a simulated person (AI).
LLMs are not the path forward to simulating a person; this is a fact. By design they cannot reason. It's not a matter of advancement, it's literally how they work as a principle: a statistical trick that generates random text that looks like thought-out phrases, with no reasoning involved.
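To make the "statistical trick" claim concrete, here is a toy sketch of statistical text generation using a bigram (Markov chain) model. This is a hypothetical miniature for illustration only, not an actual LLM (real LLMs use transformers over subword tokens with learned probabilities), but the generation loop has the same shape: repeatedly sample the next token from a conditional distribution over what tends to follow the current context.

```python
import random

# Tiny "training corpus"; in a real LLM this would be a huge text dataset.
corpus = "the cat sat on the mat and the cat ran".split()

# Build a bigram table: for each word, the list of words observed to follow it.
bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling the next word from the
    observed follow-up distribution -- no reasoning, just statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        choices = bigrams.get(out[-1])
        if not choices:
            break  # dead end: no observed continuation
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the", 5))
```

Every output is locally plausible (each adjacent pair occurred in the corpus), which is exactly why such text can look coherent without any underlying reasoning.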
If someone tells you they might be the way forward to simulating a human, they are scamming you. No one who actually knows how they work says that, unless they are the CEO of a trillion-dollar company selling AI.