>woke: writing fanfiction where an idealized version of you talks the bad guys to death in long rambling pseudo-Randian monologues
HPMOR is cool though.
It's a far less interesting retread of the first book (which was already not that compelling), only helped by Eliezer's writing style. Probably more complicated than it should have been as well—there's a reason Rowling needed seven books to tell her story.
It gets even sillier, as they aren't talking about AI as it exists now, but about AGI. They will not even consider the idea that perhaps normal AI also needs to be changed because it isn't an (imagined) x-risk.
(That some of them have decided that AI racism isn't an x-risk, but that imagined crazy SJW radical egalitarian racism (racism being over and all) is, makes it even stranger. And yes, hashtag notallEA, not all of them are like this.)
Exactly, they always start talking about AI being a risk and all, and I'm all like "Yes, the indiscriminate use of black-box algorithms in stuff like pol-" and then it turns out they're talking about vengeful-boogeyman-gods thousands of years in the future.
Given that it’s been like two decades, now, and they keep recycling the same old scary AI nonsense, it won’t be very long until someone trains an ML model to effectively replace 99% of rationalist blogs and Twitter. Maybe they’re right, and AI x-risk is real and coming for them.
I just recently started attending EA meetings, hoping they would inform me on how to actually take effective action and make things better. Instead it’s just going over philosophy, which would be better learned by reading some philosophy books rather than meeting up in a group of nerds. I love philosophy—it’s one of my favorite things—which is why it pains me so much to see the philosophical content in EA so poorly addressed through the readings. But what’s worse is that it doesn’t focus on how to take effective action. I understand that it was designed to help people who perhaps had a very limited knowledge of ethics, but if that’s the case, then stop trying to teach us some sort of impossible moral calculus and just tell us how to actually do good.
I'd say Aristotelian Virtue Ethics has much more merit than most of the noise that seeps out of EA. EA reminds me of the pedantic flaws of Chidi the moral philosopher from The Good Place, so focused on formalism and semantics that they can't just make a decision that affects the real world.
My position is that we’re definitely at an inflection point where enough idiots could think AI is the most important challenge facing humanity right now that AI becomes the most important challenge facing humanity right now.
My position is that enough idiots thinking AI is the most important challenge facing humanity right now is the most important challenge facing humanity right now.
My position is that the position that the most important challenge facing humanity right now is that enough idiots have the position that AI is the most important challenge facing humanity right now is the most important challenge facing humanity right now.
Aw man, for a second there I thought you were the same asshole who got mad at me today in a four month old thread about Peter Singer and bringing that to this thread for no fucking reason, I was super hyped for that
Had another chat with GPT-3… I wonder how many credits I have left with this new delphic oracle…
Q: Is it worth trying to convince Sneer Club that AI is dangerously close to surpassing human cognition on all fronts?
A: It is difficult to say whether or not it is worth trying to convince the Sneer Club that AI is dangerously close to surpassing human cognition on all fronts. On one hand, it may be beneficial to try to raise awareness about the potential dangers of AI technology and its rapid development. On the other hand, it is possible that the Sneer Club will be resistant to any warnings about AI and its dangers, and may instead choose to view AI as a positive development.
Q: Actually, Sneer Club look at AI from an anti-capitalist and social justice perspective. They are concerned with how it affects human power relations, but not with the possibility of autonomous AI surpassing all human control.
A: In that case, it may not be worth trying to convince the Sneer Club that AI is dangerously close to surpassing human cognition on all fronts. Instead, it may be more beneficial to focus on how AI affects human power relations and social justice issues.
There's a lotta reinforcement learning experiments where the AI learned to lie to human participants in order to win whatever game they're playing. Is that the same as deception?
I once watched an interview with Peter Singer on YouTube where he and the host agreed that more suffering has been caused by factory farming than human slavery lol
I once read a blog post where someone suggested that the suffering of the pigs involved was an illusion because I’m not smart enough to realize that pigs are too stupid to have sensory systems that experience pain. And then people gave that man like six billion dollars to save the world writing shitty fanfic (about his doomsday cult).
Anyway, even if you think slavery is a far more egregious wrong than factory farming, because human dignity matters a great deal, it is surely plausible that if you can compare quantities of suffering (which is not necessarily trivial to do), the numbers are going to start looking very strong in favor of livestock having a larger quantity, given how many of them there have been. That doesn’t tell us much by itself other than that there have been a lot of them, and they have been treated poorly. The Just Crunch the Numbers types add in a bit of reasoning that says ethics just is doing the math, but this is a major substantive claim, not a settled issue.
The real issue isn’t reaching that conclusion. Asking the question as if the answer is of any significance or relevance is the mistake. “Who suffered more, the Jews in Germany under Hitler, or the victims of Genghis Khan?” is a question that’s about as important to answer as how many angels can dance on the head of a pin.
Perhaps one wants to make a point that animals suffer. Then make that point. No need to literally equate Black people to animals, because that is not an effective way to make your argument.
Sorry to grave dig your comment, but that's just land animals. If you count sea life, which you should because they also suffer, we kill over a trillion animals every year
There are nine billion broiler chickens raised in factory farms a year, in the USA alone. They are raised in conditions like [this](https://images.ctfassets.net/ww1ie0z745y7/6Epxcn60DfNL6dtFgawllT/e9da807fdb0d3c6e39086f7dc3001c68/factory-farmed-chickens-the-humane-league.jpg).
The statement is an entirely reasonable conclusion to draw, given the sheer number of sentient beings trapped in those conditions for their entire life.
Just recently they [roasted 5 million chickens alive](https://www.theguardian.com/science/2022/apr/28/egg-factory-avian-flu-chickens-culled-workers-fired-iowa) to contain a bird flu outbreak
broke: being concerned about AI being used to obfuscate and jazz up shitty business practices while injecting human bias into “fair” processes
woke: writing fanfiction where an idealized version of you talks the bad guys to death in long rambling psuedo-Randian monologues
bespoke: driving yourself insane in fear of a non-existent thought experiment monster
the tweet
So no altruism is more effective than any altruism, I guess.
Don’t engage, Random.
Don’t engage.