Legs are like cars the way centipedes are like car collectors.
Centipedes are like car collectors and will soon collect all the cars. There will be no cars left for anyone else.
This is my counterintuitive thesis; if you do not understand it, you need to update your priors extra hard and then steelman it until you follow my logic.
In that all 4 can just kind of rise up and start murdering us all on their own one Tuesday here or a month back?
Hard agree, it's damn near tautological.
Fair, however, the reasoning is undisagreeable:
Who's to say there isn't a person in our legs? That they could usurp other body parts to remove themselves (fatally?!), and go about reproducing infinitely with no concern for humanity at all. Worse, they could be hostile instead of neutral. What if they made very tiny legs? What then? Huh? Exactly, checkmate.
I wrote this TED talk with no preparation, and everyone around me has been clapping continuously since I hit reply.
Good old "Anything can be broadly compared to the equipment I work with regularly, so everything *is* that equipment" brain.
Some people thought the world was like a wheel, some thought it was like a book...
Computers had a proof that they can do literally every possible computation back when the word "computer" meant "a person mechanically doing calculations on paper"; wheels didn't. Sure, silently assuming computationalism in philosophy of mind without even considering other options is a stretch, but "hurr derp nerds think everything is a computer because they don't touch grass" is a lazy take.
On the other hand, I'm going for malice (deliberately hiding the assumption of computationalism, and hoping the reader doesn't catch on) while yours only requires incompetence (picking the most complicated device around and saying that it must be the same as a human brain), and I have no idea what's more likely with AI cultists.
[Church-Turing thesis](https://en.m.wikipedia.org/wiki/Church%E2%80%93Turing_thesis) Yeah, I know that the most interesting part can't actually be proven, and that it is possible that someone invents physically realizable supercomputation; it's just unlikely enough for me. I don't see any problem with assuming that, as I'm not aware of any theory of consciousness that would require physically realizable supercomputation. The ones I know either abandon physicalism altogether, or postulate a reason why a human simulated on a sequential computer would be a p-zombie (like [Integrated Information Theory](https://en.wikipedia.org/wiki/Integrated_information_theory)).
Edit: okay, I forgot about [Orch-OR](https://en.m.wikipedia.org/wiki/Orchestrated_objective_reduction).
This is not a fun sneer. It angers me. Greatly.
Or rather, my GPT-X is telling me there is anger to experience. Is there a person in me? We just don't know.
What I do know is that's maybe the most royal We I've ever heard.
How does Mr. Yudkowsky respond when (assuming I'm not near the first) total laypersons want to engage him on these issues? I mean, I don't even understand how any of this stuff is even remotely defendable against a literal 5 year old; the aether itself seems vocal enough to do it, f f s.
Ya, but not near as much as my insisted honorific.
It's not like this dude has been wearing a different dress to the Mirage 4 shows a week for 30 years.
But that's just a trick of Twitter, I can imagine how his pamphlets would have read.
My friend once told me that when she was 5, she asked her dad how nuclear energy works. He said, when you keep splitting a thing into two parts, you will reach a point where splitting it one more time will create a big explosion that destroys everything.

So she went to the garden, grabbed a flower, and started ripping it into smaller and smaller parts. When she got to the point where it was too small, she split it one more time. Then she started crying and had a breakdown because she thought she had just destroyed the world.

She understood nuclear fission about as well as some of these doomers understand AI (actual AI researchers on the AI doom train excepted, of course).
I’m intrigued that the board game AIs move further to the right as they get more powerful. How does AlphaZero “represent more outcomes” than AlphaGo, my dude? As far as I’m aware there are still only three possible outcomes of a Go game.
I think he means that AlphaZero is more generic - it can play arbitrary games, rather than just Go. So it's true that it can represent more game states than AlphaGo can.
The plot is kinda nonsensical though, because "optimization power", whatever that means, should be more or less the same for AlphaGo, AlphaZero, and MuZero - they all use Monte Carlo tree search. And it should probably be *higher* for Stockfish than for the others, because Stockfish uses sophisticated heuristics in addition to Monte Carlo-type planning.
EDIT: It also doesn't make sense to put LLMs on the plot at all; they have no inherent "optimization power". You can use any kind of planning algorithm when you represent the state space with an LLM.
Oh yeah, that makes sense. But yeah Stockfish should probably be higher than the monte carlo neural network type stuff since it's actually doing game tree search, not monte carlo, so it's actually _better_ at optimizing
(i mean maybe it does monte carlo as well, i'm not like an expert on the internal workings of stockfish. but it does consider all moves from each position whereas to my understanding go AIs don't)
"Monte Carlo" in this context is just game tree search, but some branches are prioritized more than others if you don't have time to visit them all. The neural network determines the prioritization. I think Stockfish does that *too,* but its special sauce is that it also uses hand-coded heuristics to inform the tree search prioritization, whereas the various Alpha- and Mu-AIs deliberately avoid doing that, because they're supposed to be proofs of concept for generic AI.
er, right, bad phrasing on my part, but stockfish to my understanding looks at all branches of the game tree (w/ pruning) vs Go AIs which use neural networks to decide which branches to examine since there's way too many possible moves otherwise. (again unless I'm wrong about how stockfish works, but to my understanding this is why stockfish isn't vulnerable to the same kinds of adversarial attacks that were able to defeat KataGo etc)
No idea about MC, but Stockfish has had a shallow NN for scoring individual positions for some time now (still using a tree for global optimization) and is consistently beating Leela (an open-source AlphaZero-style chess engine).
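To make "the neural network determines the prioritization" concrete, here's a toy sketch of the PUCT selection rule that AlphaZero-style engines use to decide which branch of the game tree to explore next. All names and numbers here are illustrative, not taken from any real engine; a real implementation has far more going on.

```python
# Toy sketch of PUCT branch selection (AlphaZero-style MCTS prioritization).
# Hypothetical example values only - not from any actual engine.
import math

def puct_score(value, visits, parent_visits, prior, c_puct=1.5):
    """Exploitation (average value of this branch) plus an exploration
    bonus weighted by the neural network's prior for that move."""
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + visits)
    return value + exploration

def select_branch(children):
    """Pick the index of the branch with the highest PUCT score.
    `children` is a list of dicts with keys value/visits/prior."""
    parent_visits = sum(c["visits"] for c in children)
    return max(
        range(len(children)),
        key=lambda i: puct_score(
            children[i]["value"], children[i]["visits"],
            parent_visits, children[i]["prior"],
        ),
    )

children = [
    {"value": 0.5, "visits": 10, "prior": 0.2},  # well-explored branch
    {"value": 0.4, "visits": 2, "prior": 0.6},   # high NN prior, few visits
]
print(select_branch(children))  # → 1: the prior pulls search to branch 1
```

The point of the sketch: the network's prior steers search toward branches it likes even before they've been visited much, which is exactly the prioritization role the comment above describes; a hand-coded heuristic could fill the same slot.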
AgentGPT is not going to go rogue; they need to be hosted on a computer and are far too large to become a “virus”. The best way to do something like this would be to infect computers that already have a way to run LLMs (like oobabooga).

At that point all that would be required would be administrator access (hard to get), and just a command line execution… and then they would need to not notice it running in the background taking up 70% of their GPU lmao.
But the agi would be supersmart so it would just know how to hack the task manager (or your local equivalent)! Big smart beat little smart
('Just' is doing an enormous amount of work here).
Liron has substantially gone off the deep end. He seems to be sincere, but MAN his correct interpretations of crypto seem in retrospect to be a bigger and bigger fluke.
This guy started off "promoting" his own tweets on Twitter to grow an audience, shifted to anti-crypto stuff in order to keep growing it, and has now switched to "AI risk" as a new approach. The sneers are fine with me, but I don't like handing grifters free attention.
He spoke a lot about crypto not having good use cases, and looked at the profitability of web3 companies.
It wasn't revolutionary stuff, of course, but it happened to be correct. He's doing a bit of the "I was right about crypto, so you can trust I'm right about how an AGI is going to 'brick the universe' essentially the moment it becomes self-aware, because everything I learned about thinking I learned from Eliezer Yudkowsky."
(This last bit something he actually told me.)
Succinct
“here’s a more accurate diagram.”
OBVIOUSLY the choo-choo train 🚂 of technical progress is headed toward the DOOM ZONE. Why can’t you deniers just accept that?!?!
Just like a PragerU graph!
But the y-axis needs to say “Marxism.”
Our best and brightest MSPaint users are forecasting a dire future
“[Mr.] Yudkowsky explains minds are basically computers and vice versa and everyone in every discipline understands this fact.”
Kids, don’t wave your hands anywhere near that hard/fast at home!
Diagrams that look like shitposts
Hahahahahahahaha How The Fuck Is Rokos Basilisk Real Hahahaha Nerd Just Walk Away From The Screen Like Nerd Press The Off Button
Shhh nobody tell him about decision transformers, he might not recover from that.
DOOM ZONE
everything that has a beginning has an end.
There are no units on the graph so how can he tell how close we are to the ‘doom zone’ if a sizable ‘doom zone’ exists at all?
Edit: I am not even good at math, and even I know useful graphs must have units. Shouldn’t the Twitter user know that too?