Once you’ve trained your large language model on the entire written output of humanity, where do you go? Here’s Ilya Sutskever, ex-OpenAI, admitting to Reuters that they’ve plateaued: [Reuters] The…
I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”
This article, in contrast, quotes the people building the next generation of AI saying the same thing.
I think you are confusing AI with AGI.
Not at all. AI is something that uses rules, not statistical guesswork. A simple control loop is already basic AI, but the core mechanism of LLMs is not (the parts before and after token association/prediction are). Don’t fall for the marketing bullshit of some dumbass Silicon Valley snake oil vendors.
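To make the commenter's "simple control loop" concrete: a bang-bang thermostat is a classic example of a purely rule-based controller, with fixed deterministic rules and no learned weights or statistical prediction. This is a minimal sketch under my own assumptions; the function name, setpoint, and hysteresis values are illustrative, not from the source.

```python
def thermostat_step(temperature: float, heater_on: bool,
                    setpoint: float = 21.0, hysteresis: float = 0.5) -> bool:
    """Return the new heater state from fixed rules, not learned parameters."""
    if temperature < setpoint - hysteresis:
        return True   # too cold: switch heater on
    if temperature > setpoint + hysteresis:
        return False  # too warm: switch heater off
    return heater_on  # inside the dead band: keep the current state

# Run the loop against a crude (assumed) environment model
temp, heater = 18.0, False
for _ in range(5):
    heater = thermostat_step(temp, heater)
    temp += 0.8 if heater else -0.3
```

Every decision here is traceable to an explicit rule, which is the distinction the commenter is drawing against an LLM's token prediction step.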