Nah, I wouldn't. 'Rationalism' has always been primarily concerned with appearance over substance - the appearance of being mild-tempered and rational with a contrarian/outlier perspective.
“Unhappy drones are unproductive drones. We must ensure maximum efficiency to save some batshit prediction of the future. The lives of 10^54 future humans depend on it.”
I guess it’s good that he managed to integrate an outside opinion. Though I don’t get a great vibe off requiring meditation to integrate takes? Idk.
I also like the fact that they didn't bother to engage with the idea that maybe spending 5k a month to enable you to get 21 hours of productive work done every week isn't actually a very rational thing to do.
That's 3 hours a day of self-professed 'productive' time. Did the guy actually specify whether he expected a return on that expenditure? From the information available in this post, all I can conclude is that he would be 5k better off each month if he simply didn't pay for the assistants.
I decided to look at the comment thread. Like previous decisions to learn more about the rat community/LessWrong, I deeply regret that decision.
I’m not above noticing he only hired women to crack the proverbial whip over his head, granted. But unless I’m missing something, he just… jumped to the decision that being supervised/having someone around him was the best way of doing things? Wouldn’t it be more rational to interrogate his past attempts and go from there? Look up other studies? Try the experiment with men as a control or whatever?
I’m probably putting more thought into this than is worthwhile.
Or better yet, he could have simply asked the members of the forum if anyone else wanted to be his 'study buddy,' so to speak.
Given the number of people in the thread saying they want to try the same thing, the logical next step would be to put together a rota: every rationalist who wants to boost their productivity pairs up with another whose work times synchronise, they share their terminals (or simply rely on the honour system, since it wouldn't be rational to lie to each other), and get to it.
Wouldn't cost them a dime, and they could even evaluate each other and suggest improvements etc.
Almost like having real jobs.
I guess it's not very rational to seek out like-minded individuals for collaborative work projects when you can just pay random girls money to pretend they give a fuck about what you're doing.
Uh oh, sounds like you’re being negative. Better use softer language so they can properly integrate our feedback. What kind of people are we if we only give direction, not magnitude?
The thing is that it is "real AI".
The problem is that modern machine learning applications are commonly referred to as AI, and the rationalists then equivocate and act as if this is the same thing as the AI/AGI that scares them, when it is nothing of the sort.
I once talked chatGPT into saying it was having an existential crisis. Feels more stupid than unethical, and a bit narcissistic, but given the rats… yeah.
Which makes that post a little while back where the guy started falling for the chat bot just the funniest thing.
He talked himself into falling for the chatbot using the chatbot as a medium.
Like sitting on your hand until it goes dead and then having a wank.
What? Rationalists are very cruel to people when they get the chance to be. Listen to The Bayesian Conspiracy or The Mind-Killer podcasts. There are obvious out-groups that they are happy to wield relational aggression against when they have a window to do so.
[deleted]
Yes, good start, now extrapolate: why might there be a concern for worker treatment in this particular situation?
“The gears to ascension”, I think I got that achievement in Cultist Simulator
https://postimg.cc/Hcr8NFnh
Another favourite from this thread. Asking for moral advice from chatGPT.
Wouldn’t this also be extremely unethical from a rationalist perspective?
Best done in a locker.
Apologies for repost, original image host wasn’t showing up.