So this is basically like looking at Moore’s Law and declaring it’s inevitable that Intel will have subatomic transistors by (certain year) regardless of the physics involved, right?
It is literally equivalent to that. Infinite computation requires infinite energy or infinite material resources, and so there is in fact a physical barrier to achieving it.
Actually, human brains are probably very close to those limits, in terms of electricity in vs computation out. There was a recent post on LW dismantling Yud's argument that human brains are inefficient.
Human brains being nowhere near the limits of thermodynamic efficiency is a key part of Yud's foom scenario.
If you accept that human brains are efficient, then the GPU and electricity requirements for a (fictional) "over 9000" ASI are much higher.
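To make the "physical barrier" point concrete, here's a back-of-envelope sketch using the textbook Landauer limit (kT·ln 2 per irreversible bit operation). The ~20 W power budget and ~310 K operating temperature are commonly cited ballpark figures for a human brain, not numbers from the thread; they're only here to show the shape of the arithmetic:

```python
import math

# Landauer limit: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # approximate body temperature, K (assumed ballpark)
e_bit = k_B * T * math.log(2)  # joules per irreversible bit operation

# Illustrative power budget for a brain (commonly cited ~20 W figure).
brain_watts = 20.0

# Hard thermodynamic ceiling on irreversible bit operations per second
# at that power budget -- no amount of cleverness gets past this.
max_ops = brain_watts / e_bit
print(f"Landauer limit per bit: {e_bit:.2e} J")
print(f"Ceiling at 20 W: {max_ops:.2e} bit-ops/s")
```

Whatever the real efficiency of brains or GPUs, the point stands: computation at a fixed temperature has an energy floor, so "infinite computation" really does demand unbounded energy.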
Sorry I'm just contributing to the conversation cause it tickled me to see Yud get owned on LW. Didn't mean to sound like a douche
These guys never seem to realize that 100 IQ is the statistically defined average and was scooting up every year for decades, and they also never seem to understand that the scale isn’t calibrated for extreme numbers. The test doesn’t work for extremely high or extremely low values, and even within its calibrated range it doesn’t apply to organisms that aren’t basically human in cognition and perception.
You can show all the color-based symbols you want to a tetrachromatic creature and not learn anything about its intelligence, and you can ask all the IQ questions you want of a large language model and it isn’t going to tell you anything except what the model was trained on.
Yeah, they also don't seem to realize that these ML models can already do things humans cannot do, or do only worse (and as such they are already superhuman).
His whole argument about evolution not optimizing for intelligence has similar flaws, in that ChatGPT (or an imagined self-improving upgrade) also will not optimize for intelligence. They just assume machines will not get stuck in some weird local maximum, will not run into hard-to-solve problems (so intelligence doesn't explode in some nefarious way), and will not simply hit a physical/practical limit.
It leads to weird comparisons to evolution like this:
> Meanwhile, we made hundreds of thousands of years of progress going from GPT-2 to GPT-4.
And the whole airplane thing is lol in a way, as it was a quick spurt of innovation over a couple of decades, and now the field is reasonably stagnant. The product lifecycle has matured.
But he acts like airplanes will go faster than light in a couple of years.
It's odd that the one reply he seems to have gotten is from some weird IQ-idiocracy person.
>His whole argument about evolution not optimizing for intelligence has similar flaws
I mean, the list of hidden assumptions in that idea of his is staggering. There's the common implicit assumption that "intelligence" is a single, general trait that evolution can "optimize" for, some misunderstandings about how evolution works ('phenotypes' for behavior are generally multi-gene and massively complex), etc.
> There's the common implicit assumption that "intelligence" is a single, general trait that evolution can "optimize" for
Yep, and it's one that's ridiculous no matter how you look at it.
What is intelligence? Raw arithmetic computation? Analytical reasoning? Memory? Social reasoning? Linguistic manipulation? Spatial / coordination? Etc.
And those are still difficult to quantify even on their own.
They wanna treat "smart" as the same thing as "tall" and just ignore any issues that come up, mostly because they all took tests that told them they were smart.
You repent. Are you trying to tell me a mosquito is NOT as perfectly evolved as a human!? Are you trying to tell me GOD didn't make the mosquito as well? HIS EYE IS ON THE SPARROW, AND I KNOW HE WATCHES THE MOSQUITO TOO.
And yea, at 4:45 on the sixth day as the hour of rest and network sitcoms approacheth, the Lord kinda threw something together on His way out the door.
Literally every organism that evolves is just as "evolved" as every other one, unless there's a separate origin of life with a different age. It's not an intuitive idea but that doesn't make it any less true.
> Yeah, they also don't seem to realize that these ML models can already do things humans cannot do, or do only worse (and as such they are already superhuman).
Hell, basically *any* computer can do arithmetic operations at a superhuman level. Computers are not, and have never been, simply linearly "getting smarter" until they eventually hit and surpass humans. They are good and bad at different things than we are. Indeed, that's why early computers were useful -- "I made a thing that's you but dumber" is not a compelling sales pitch! As such, "we're going to make a computer with 9000 IQ" is not really a meaningful or interesting claim, even setting aside the issues with "9000 IQ" or just "IQ" on their own.
If you are actually interested in the current research, take a look at
https://github.com/atfortes/LLM-Reasoning-Papers and https://arxiv.org/abs/2303.12712
Edit: I seem to be out of the loop here. Anyone care to explain the downvotes? I'm here to enjoy sensible sneering against the lesswrong crowd, but not dogmatic, anti-scientific sentiments against anything AI
This post doesn't deserve downvotes, that repo is a great resource I didn't know about. The research on reasoning in LLMs is cool indeed, I suppose people are assuming you're an x-risker for pointing it out. I dunno about you, but I don't think advanced reasoning = doom. Most of the tangible risks (social, economic) don't require much reasoning anyway.
This paper speaks to your original point, that models can link concepts across training domains by using chain-of-thought: [Why think step-by-step? Reasoning emerges from the locality of experience](https://arxiv.org/pdf/2304.03843.pdf). It's a synthetic experiment with some neat results, including testing the hypothesis that if the training data don't have local structure necessitating cross-domain reasoning, then chain-of-thought prompting doesn't help.
This exactly. If it can answer questions whose answers exist in the data, that isn't proof of anything beyond the fact that the answer existed in the data.
It's impressive that it can understand the question and select the correct piece of data to answer it, but to my mind that's the only impressive part and even that ability is limited by how recognisable the data is. The fox and the chicken riddle is simply a logic puzzle. I'd wager that if one was to come up with an original riddle, one whose answer does not exist in the data already, it would flounder.
My personal test, aligned with my own field of interest, is asking GPT to perform creative writing tasks. I've been asking GPT to write an original passage in the style of my favourite authors, and it quickly becomes clear where GPT's shortcomings are.
Interestingly, asking it to generate text about certain basic things produces the same observations, no matter what style you ask it to adopt.
A visit to a fast-food restaurant, regardless of the author you ask it to imitate, will result in a diatribe concerning the emptiness of consumerism. It has no way of generating different sets of observations aligned with the ideals of the author it is imitating. No matter what narrative people are trying to push, it can't build a model of an artist from their body of work and use it to generate distinct observations. When it comes to writing, it can't even convincingly assume a surface-level imitation of prose style.
It's little wonder that those in STEM fields are more easily fooled into thinking that LLMs are approaching human levels of intelligence, because they are asking questions that generally follow straightforward chains of logic.
>Just look at what is happening in chess: the best players in history are nowhere near even simple chess bots, let alone the best in the world. The best chess players in the world can rarely calculate more than 10 moves in advance, while top AI can calculate over 100 next moves easily and keeps on improving every day.
It's actually pretty easy to measure in humans. There are tests you can do in under a minute that give a correlation of .8.
People who traffic in bullshit for a living certainly seem to have very high opinions of the actual capacity of what are essentially very advanced bullshitting models.
IQ is not a reliable way of measuring intelligence, for starters. Statistically, a 100 IQ is meant to be the average across a population. Of course, many early data scientists abused this in order to get results 'proving' that certain groups of people were naturally 'inferior' or 'superior'. There are a lot of implicit assumptions being made when someone tries to assign a non-human entity an IQ, and it doesn't even make any sense for that number to be in the thousands.
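To make the "average across a population" point concrete, here's a minimal sketch of how a norm-referenced score works: a raw test score is converted to a z-score against the population and mapped onto a scale with mean 100 and SD 15. The function and numbers below are illustrative, not from any real test battery:

```python
from statistics import NormalDist

def iq_from_raw(raw, pop_mean, pop_sd, mean=100.0, sd=15.0):
    """Norm-referenced scoring: map a raw test score onto the IQ scale."""
    z = (raw - pop_mean) / pop_sd  # how many SDs above/below the population mean
    return mean + sd * z

# One SD above the population mean lands at IQ 115 by construction.
print(iq_from_raw(115, pop_mean=100, pop_sd=15))  # 115.0

# How far out would an "IQ of 9000" be on this scale?
z_9000 = (9000 - 100) / 15           # ~593 standard deviations above the mean
tail = 1 - NormalDist().cdf(z_9000)  # fraction of the population above it
print(z_9000, tail)                  # the tail probability underflows to 0.0
```

Because the scale is defined relative to a human population, an IQ in the thousands doesn't correspond to any point on the distribution; the number is meaningless by construction.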
It's the most reliable psychological test for measurement of any trait ever constructed.
I'm sorry that doesn't fit inside your culturally and politically correct view of the world, but those are just the facts, and anybody working inside cognitive science will tell you the same.
Both of your assumptions are purely emotionally based, reflecting how you would like things to be, and all the science is clearly pointing in the other direction.
I'm not sure where you're getting this. The test is actually very unreliable and has to constantly be reworked, since the results are heavily dependent on who is administering the test. This isn't based on emotion; this is based on meta-analysis. I recommend reading The Mismeasure of Man by Stephen Jay Gould.
If you had looked at any unbiased research, you would have known this.
The correlation between different IQ tests, regardless of who is constructing or administering them, is higher than that of any psychological test for any trait ever constructed, which means it's very precise at what it tries to measure.
The reason people don't like IQ tests is because the majority score lower than they perceive themselves to be (90% of people think they are among the top 10% smartest people), and because it goes against the Western/Christian view of the world that we are all the same and equally capable... and other PC ideas that might sound nice but are not based in reality.
So the same IQ as 9,000 Substack bloggers added together?
(Origin)
Of course there isn’t - it’s a normed measure. Aghhhh.
Puny mortals. I have an IQ of 69420.
Techbro explaining that he doesn’t know the first thing about psychology or how intelligence is measured in humans
Wait I didn’t know ai models had a race that fit into white supremacy (but it would make sense given the recent layoff waves)
Lvl 2 Markov chain spambot vs Lvl 9000 LLM Robot God. That’s how AI works.
Over 9000?!?!
My god, it’s going to be as smart as Alakazam!
Knowledge is not wisdom ya fucking idiot
I also enjoy browsing dank memes on the internet
It’s over 9000!!!
[removed]
Down the memory hole
To the moon