Wow that’s a dumb idea. “Let’s have some people drugged up on experimental nootropics design the AI’s ethics” sounds like the premise of an absurdist twist on Terminator.
Of course, Eliezer would reject their solution also, because what he wants is something with the rigor and surety of pure mathematics, perfectly implemented in computer code, able to handle the full messiness and complexity of the real world. The problem he wants solved is well past ill-posed and is into the realm of crank science and religion.
Right. But if he hadn’t truly bought his own hype and were purely a grifter, I think he could do better at maximizing the grift. Instead, because he bought into it, he has to cope with failing at the impossible challenge he set for himself.
Let's not discount the possibility that this is maximal grift-utilization for Yud, and his personality and general vibes hold him back from siphoning more money.
I wish folks like him could accept that it's ok to be a regular ass person. He could usefully contribute to which direction AI goes if he could let go of the desire to be king of it.
I don’t think he has the expertise to contribute directly to actual ML work. He is pretty good at hyping things and doing PR to the nerd demographic. If he had spent the past decade amplifying other people’s work on AI ethics and algorithmic bias and interpretability, that would have been useful…
If he had said, ten years ago, "hey, racism is a cognitive bias, and we should probably pay attention to the way that cognitive biases like racism impact algorithmic decisions (especially hiring decisions)" it would have been super helpful. Instead, we get this.
He's a eugenicist like 99% of (not necessarily clinical) narcissists who are into tech. Gotta spread those big brain genes and suppress less ~~pale~~ favored varieties. Ultimately, his sense of self-importance is what's driving his racism and the need to be in control of where AI goes. He should be a hype man. A support role would probably start the process of breaking down some of his illusions, given that the core of them is so ego-driven.
He’s too much of a narcissist, potentially bordering on a psychopath, to ever accept that he’s not special, much less the smartest genius who ever lived destined to save the world.
Does he actually make any money off of being who he is? I mean, I know about MIRI, but are there enough people out there who buy into his bullshit enough to keep him in Cheetos and Mtn Dew?
As somebody who's done it - provable computer code is a huge, laborious pain in the ass, *when you're dealing with totally precise and understood specifications*. Doing it for customer-facing code like a UI - let alone something as underspecified as "ethics" - is a complete pipe dream.
Once heard of a guy who was working on provable code for a train or something for months. Eventually he emerged from his office and loudly shouted, 'I have done it!' 'Done what?' 'I have proven that the train does not explode in the first second.'
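For anyone who hasn't touched formal verification, here's roughly what that hard-won theorem looks like, as a toy Lean sketch. Everything here is invented to match the anecdote (the names, the trivial state-machine model); it's the shape of the work, not anyone's real verification effort:

```lean
-- Toy model, invented to match the anecdote: the train is a state
-- machine that is either running or has exploded.
inductive TrainState where
  | running
  | exploded
  deriving DecidableEq

-- One tick of the (deliberately trivial) dynamics. A real verification
-- effort would have to model the actual controller here.
def step : TrainState → TrainState
  | .running  => .running
  | .exploded => .exploded

-- The train's state after n ticks, starting from a running train.
def trainAt : Nat → TrainState
  | 0     => .running
  | n + 1 => step (trainAt n)

-- Months of work later: the train does not explode in the first second.
theorem train_safe_at_zero : trainAt 0 ≠ TrainState.exploded := by
  decide
```

Even in this cartoon version, every new property means pinning down the model and proving against it; with a real controller spec the effort balloons, and with something as underspecified as "ethics" there's nothing precise enough to prove in the first place.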
>Wow that’s a dumb idea. “Let’s have some people drugged up on experimental nootropics design the AI’s ethics” sounds like the premise of an absurdist twist on Terminator.
And the AI is hounding their every step, sending police after them, destroying a shipment of nootropics, etc.
And in the final climax, the protagonist encounters the AI and it says, "Stop using untested drugs on your most important and least understood biological systems, you *weirdos*."
That's pretty much what mania is like. You're flooded with ideas and your capacity to sort through them is wrecked by the amount and by the uncontrollable excitement turning your brain into a leaf being blown wildly around by a hurricane.
> “Let’s have some people drugged up on experimental nootropics design the AI’s ethics”
Incidentally, that was exactly the strategy that FTX tried to use, but replace "AI's ethics" with "world's financial infrastructure". I don't understand how some people still haven't figured out the implications of that.
It may be a dumb idea but it is a fun idea. A room full of people all on IV drips of various concoctions with their eyes held open a la Clockwork Orange.
how is this guy taken seriously? honestly, I’m getting SBF & FTX vibes from Yud and OpenAI/Worldcoin.
edit: same with Geoffrey Hinton and Yoshua Bengio peddling the “godfather” title and speaking to all their regret when it’s clear they were relevant and now they’re not, with respect to the actual bleeding edge.
I think the average person does not understand the difference between what Yud is doing and real AI research well enough, and the media does a bad job of communicating those differences.
Those godfathers are pretty vile in a way. Before the hype they were silent while POC/women who talked about AI ethics were fired, and now they claim regret and innocence.
The gist: Yud declares his latest sci-fi fixation, human augmentation, is necessary to save humanity and proposes experimenting on “suicidal” volunteers.
When asked if he was willing to take those risks himself, he immediately and predictably demurs.
He’s too old, you see, and his health is too poor (ie, he’s out of shape but too lazy to do the work to get fit, which he can only interpret as a mysterious and crippling ailment).
In classic fashion, Yud has made grand pronouncements about What Must Be Done but won’t do the work. Like a child playing make believe, talking about actually solving the problem punctures the illusion and ruins the fun.
To be honest, I wouldn’t be surprised if Eliezer also gave himself stress-related health problems from constantly dwelling on his own doomsday scenarios for the past decade.
If the Thiel money has dried up, or is drying up, hopping on the Musk Crazy [~~Train~~](https://www.youtube.com/watch?v=Djrl6fu8myo) Cybertruck and trying to join Neuralink might be the next step.
So what is his angle, here? It’s almost as though his vested interest in all of this is to shut down all research so that MIRI can be some sort of gatekeeper to what AIs get built. Gotta get the “friendly AI/alignment confirmed” stamp of approval. His desperation in trying to cling to relevance is astounding.
Yes. The endgame of "ai safety" is levers and concentrated control of the world's most scalable, cheapest labor force.
The politics of existential risk in a nutshell.
His angle here is that he wants to keep the cult running. I don't think he wants to be the guy holding the "Safe AI" stamp, because that's work and he doesn't want to do work. He wants to convince people that because there needs to be a guy with the Safe AI stamp they should be his math pets and support him living a lavish lifestyle without having to work.
It's honestly fucking hilarious that they spent over a decade sperging out about fantasy AI systems as The Only Players In The Alignment Game and then the moment humanity inches closer to an actual AI they're immediately made irrelevant.
I genuinely believe that they're terrified of AI because it's going to reveal they've been talking absolutely nonsensical shite and making money on the spurious basis that it was actually research.
That's exactly it. When all the "AI" progress that is happening is "back-end recommendation algorithms get better", it's easy to pretend that your random theorizing is real work, because neither end has any impact the average person can identify. But when the average person can go use ChatGPT, sniffing your own farts is just really obviously not useful work.
It's pretty amazing how the solution to a superintelligence potentially killing everyone is to potentially kill everyone, and the solution to a superintelligence giving corporations unlimited power is to give corporations unlimited power.
What are the nootropics being referred to here? I’ve done plenty of modafinil, which is usually regarded as the king of nootropics, and all the moda in the world doesn’t actually make someone smarter. They just have an easier time focusing on a task for longer periods of time.
NZT, Spice Melange, Sapho Juice, and/or Wyvern formula all seem promising… oh wait you mean real nootropics? Probably Eliezer assumes there is some potential wonder drug that could get tested if only there was a way to cut through the FDA’s red tape.
Why do you think there's an adderall shortage? They're *sacrificing* themselves microdosing LSD and megadosing adderall. They're trying to save the world!
What if some math pets got thrown into the equation?
Academia is by far the worst social knowledge acquisition system, except for all the other ones.
Wow. Just as credulous about the possibility of human augmentation as he is about superintelligence.
Lollll, i love how Yud basically combines all his favorite Asimov and anime tropes into one Twitter thread
Eloser’s constant refrain when actually challenged seems to be “I’m tired/weak so I can’t help like I used to (pretend to) be able to. Woe is me!”
Just pathetic.