The local Effective Altruism chapter had a stand at the university hobby fair.

Last time I read their charity guide spam email for student clubs, they were still mostly into the relatively benign end of EA stuff, listing some charities they had deemed most effective by some methodology. My curiosity got the best of me and I went to talk to them. I wanted to find out if they’d started pushing seedier stuff and whether the people at the stand were aware of the dark side of TESCREAL.

They seemed to have gotten into AI risk stuff, which was not surprising. Also, they seemed to be unaware of most of the incidents and critics I referred to, mostly only knowing about the FTX debacle.

They invited me to attend their AI risk discussion event, saying (as TREACLES adjacents always do) that they love hearing criticism and different points of view and so on.

On one hand, EA is not super big here and most of their members and prospectively interested participants are probably not that invested in the movement yet. This could be an opportunity to spread awareness of the dark side of EA and its adjacent movements and maybe prevent some people from falling for the cult stuff.

On the other hand, acting as the spokesman for the opposing case is a big responsibility and the preparation is a lot of work. I’m slightly worried that pushing back at the event might escalate into a public debate or even worse, some kind of Ben Shapiro style affair where I’m DESTROYED with FACTS and LOGIC by some guy with a microphone and a primed audience. Also, dealing with these people is usually just plain exhausting.

So, I’m feeling conflicted and would like some advice from the best possible source: random people on the internet. Do y’all think it’s a good idea to go? Do you think it’s a terrible idea?

  • @evasive_chimpanzee@lemmy.world
    9 months ago

    I think it would probably be good to go and shed some light on what the movement actually is. On the surface, the whole point is “how do we do the most good?”, which is a fair question to ask. For university students still finding their way in the world, I’d say it’s a good thing that they’re trying to answer it. Many of the techy goals of people in that realm seem like cool sci-fi. It’s only once you dig deeper that you see the true sinister nature of the people in the field.

    They claim that through technology, they will be able to usher in a utopia where people don’t have to work as much. Funny how they don’t lobby for laws that would require technological advancements to benefit workers, not the owners. There are many examples throughout history, but one of the best is probably the cotton gin. It was created as a labor-saving device in the hope of reducing or eliminating slavery, but all it did was make slavery far more profitable. That’s what happened with an inventor trying to do the right thing. Most tech these days is not developed to benefit everyone.

    It’s no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It’s self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

    I’d really just say that the people who would be in that room with you probably do have legitimately noble goals, so it’s important not to treat them as an adversary. You aren’t going to win anyone over if that’s how you approach it. Just do some research, and make sure to focus on the impact of the actions of the EA people, not their stated goals.

    • @bitofhope (OP)
      9 months ago

      They claim that through technology, they will be able to usher in a utopia where people don’t have to work as much. Funny how they don’t lobby for laws that would require technological advancements to benefit workers, not the owners.

      This is a good point, but I think it’s best to be careful with anything they might perceive as too overtly “political”. It’s one thing to argue why AI doomsday cultism is bad and another to advocate for fully automated luxury communism.

      It’s no accident that the people claiming that AGI is a risk to humanity are also the ones trying hardest to get there. They are just a little scared of AGI because it could truly cause societal upheaval, and those at the top of a society have the most to lose in that situation. It’s self preservation, not benevolence. The power structures of modern society are vital to their continued lives of extravagance. In the end, they all just want to accumulate wealth, not pay any taxes, and try to make themselves feel like a hero for doing it.

      I might be cynical, but this sounds like overselling AGI, and not just because I don’t believe we are anywhere close to creating anything I’d consider one.

      I’m not looking to have a debate or take an adversarial position. If I go, I’ll focus on making a case for why AI doom is an unrealistic sci-fi scenario, what actual AI risks we should worry about, and why some people benefit from the doomer narrative, and possibly touch on why Effective Altruism isn’t a wholly benign movement. The point is only to give them the background so they can make their own decisions with healthy skepticism.

      I don’t assume students interested in rationality and charity work are bad people or anything. Sneering at and berating them to their faces would be counterproductive.