largely based on the notion that LLMs will, with continued scaling, become artificial general intelligence
Who said that LLMs were going to become AGI? LLMs as part of an AGI system makes sense but not LLMs alone becoming AGI. Only articles and blog posts from people who didn’t understand the technology were making those claims. Which helped feed the hype.
I 100% agree that we’re going to see an AI market correction. It’s going to take a lot of hard human work to achieve the real value of LLMs. The hype is distracting from the real valuable and interesting work.
Microsoft Research published a paper about GPT-4 titled “Sparks of Artificial General Intelligence”.
I don’t think they really believe it but it’s good to bring in VC money
That is a very VC-baiting title. But it doesn’t appear from the abstract that they’re claiming LLMs will develop into AGI.
Journalists have no clue what AI even is. Nearly every article about AI is written by somebody who couldn’t tell you the difference between an LLM and an AGI, and should be dismissed as spam.
I read a lot I guess, and I don’t understand why they think like this. From what I see, there are constant improvements in MANY areas! Language models are getting faster and more efficient. Code is getting better across the board as people use these tools to improve their own work, which feeds back into overall code quality, project participation, and development. I feel like we really are at the beginning of a lot of better things, and it’s iterative as it progresses. I feel hopeful.
The call is coming from inside the house. Former Google CEO Eric Schmidt claims it will be like an alien intelligence, so we should just trust it to make political decisions for us bro: https://www.computing.co.uk/news/2024/ai/former-google-ceo-eric-schmidt-urges-ai-acceleration-dismisses-climate
Do you have a non-paywalled link? And is that quote about LLMs specifically or AI generally?