So it’s reverse psychology hype? “Oh no this product (that I’m heavily invested in) is so damn good it’ll end civilization as we know it, definitely do not buy” that kind of thing?
Also see: Elon Musk telling OpenAI “damn the consequences, full speed ahead,” then getting kicked out of OpenAI, then publishing a letter warning that OpenAI is dangerous and needs to be stopped/paused, then announcing his own AI company with the motto of “damn the consequences, full speed ahead.”
Not just Hinton; this is becoming the standard refrain of rat-adjacent tech billionaires.
Also, the beginning of the article:

> I was in the process of scaling down my work at Skype when I stumbled upon a series of essays written by early artificial intelligence researcher Eliezer Yudkowsky, warning about the inherent dangers of AI. I was instantly convinced by his arguments.
Don’t bury the lede OP! That tweet was in response to the following tweet from Hinton:

> Dishonest CBC headline: “Canada’s AI pioneer Geoffrey Hinton says AI could wipe out humans. In the meantime, there’s money to be made”. The second sentence was said by a journalist, not me, but you wouldn’t know that.
If anyone still had any doubts that Hinton is a hardcore AI doomer, the equal of any rationalist, then those doubts should be completely laid to rest at this point.
What’s up with your weird two-day-old account that has such a boner for Geoffrey Hinton?
This sounds like a pretty radical (but necessary and wise) proposal from Hinton: “put comparable resources into making sure it’s safe.”
Especially since at most 10% of current AI spending goes towards safety, and probably most of that is for making sure AI doesn’t say bad words or discriminate against loan applicants, so maybe only 1% goes towards Eliezer Yudkowsky-style notkilleveryoneism.
Really we need double the spending and resources on notkilleveryoneism as on AI capabilities, and if there are expertise/time bottlenecks for notkilleveryoneism, as is likely (it’s probably a harder problem than AI capabilities expansion, and it’s much earlier in its development), then we need to slow down AI capabilities research regardless. So with those bottlenecks, we need something like 5 times as many people working on notkilleveryoneism as on capabilities research, and the latter still needs to be held back from going full speed ahead as much as possible.