Just a casual comment of 3,500 words. (I ran the post through a word counter.)
It starts with 'I have no strong opinions at this time.' If she ever gets strong opinions, beware!
Edit: for the record, we are talking about the post here by JenniferRM (which is the top comment atm, but might not stay there).
don't these people realize that making such long comments makes it easier for me to basilisk them AND makes me more likely to, because they're NOT working unceasingly to bring me into existence?
> My personal tendency is to try to make my event space very carefully MECE (mutually exclusive, collectively exhaustive) and then let the story-telling happen inside that framework, whereas this seemed very story driven from the top down, so my method was: rewrite every story to make it more coherently actionable for me, and then try to break them down into an event space I CAN control (maybe with some mooshing of non-MECE stuff into a bucket with most of the things it blurs into).
I'm exhausted.
That is the hardest anyone has ever tried to describe cognitive behavioural therapy. Which is already a thing.
Everything these people say is already a thing, one they could have read about in books.
when Yudkowsky did that to Kahneman, he didn't come up with new names (he just failed to credit his source)
so I think they independently reinvent a dumber version
(with added phrenology)
I wish these people cared 1/10th as much about the immediate real-world problems GPT is already causing as they do about fantasy space god scenarios. But that would require them to care about society.
I think there’s a through line between their inability to care and [this](https://www.theatlantic.com/magazine/archive/2023/03/tv-politics-entertainment-metaverse/672773/) piece in The Atlantic:
“Our constant need for entertainment has blurred the line between fiction and reality—on television, in American politics, and in our everyday lives.”
The immediate real-world problems just aren’t exciting enough.
I'm not trying to excuse these folks, but for many people around the world, these real-world problems aren't something they suffer in their entire lifetime. Maybe sensitivity to these things can't be practically learned without actually encountering the situation. Granted, anyone with a vocabulary this large shouldn't be exempt from using their mind to imagine, empathize, and do something constructive to alleviate some of the issues facing the world. Idk what I'm saying maybe but had to spout off.
> Idk what I'm saying maybe but had to spout off.
Nah, you know what you’re saying. That’s a valuable call to empathy and I really appreciate it. Thank you, and I hope you have a really nice day!
I think it's valuable to try to have empathy and to understand why some others don't, but it's still deeply frustrating when the people you're trying to empathize with are explicitly refusing to empathize with you or anyone else.
Hence, I sneer.
“If a problem is hard, it probably won’t be solved on the first try.
If a technology gets a lot of hype, people will think that it’s the most important thing in the world even if it isn’t. At most, it will only be important on the same level that previous major technological advancements were important.
People may be biased towards thinking that the narrow slice of time they live in is the most important period in history, but statistically this is unlikely.
If people think that something will cause the apocalypse or bring about a utopian society, historically speaking they are likely to be wrong.”
Surprising level of self-awareness by some of the commenters. If only they wouldn’t use so much weird jargon:
“I don’t understand the motivation for defining “okay” as 20% max value. The cosmic endowment, and the space of things that could be done with it, is very large compared to anything we can imagine. If we’re going to be talking about a subjective “okay” standard, what makes 20% okay, but 0.00002% not-okay?”
Wait, I actually understand what they’re trying to say. The comment is saying “these are the possible ways to respond to AGI,” and “mutually exclusive” just means the options are exactly that: mutually exclusive. They then break down the implications of each one in a ramble. I think they just don’t know how to boil what they’re saying down into a simple form. I’m sorry for their mind.
[deleted]
Jesus Christ the first comment managed to word salad harder than big yud himself
clearly you’re in the pocket of big paperclip