r/SneerClub archives
I really shouldn’t expect better from Vox, but I’m supremely annoyed that all their AI coverage is about “alignment” and potential world ending catastrophe, rather than the actual real dangers. (https://www.reddit.com/r/SneerClub/comments/10y2clm/i_really_shouldnt_expect_better_from_vox_but_im/)

See this recent piece by known EA and (one time?) friend of SBF Kelsey Piper https://www.vox.com/future-perfect/23591534/chatgpt-artificial-intelligence-google-baidu-microsoft-openai

This AI boom poses very real dangers, but none of them are addressed here. There’s no discussion of racial bias, misinformation, perpetuation of social inequities or any real issues.

It’s all alluding to the fantasy of an accidental paper clip maximizer being unleashed on the world.

Journalistic capture, or, “what a16z would like all ‘journalism’ to be from here on out.”

Hey, that memoryholed puff piece on SBF by that "private historian to the rich" was a banger
Got a link by any chance?
> Got a link by any chance?

There are quite a few:

[https://www.vox.com/future-perfect/2022/8/8/23150496/effective-altruism-sam-bankman-fried-dustin-moskovitz-billionaire-philanthropy-crytocurrency](https://www.vox.com/future-perfect/2022/8/8/23150496/effective-altruism-sam-bankman-fried-dustin-moskovitz-billionaire-philanthropy-crytocurrency)

https://www.vox.com/search?page=4&q=sam%20bankman%20fried
[Here's the amazing story](https://archive.is/gPMMp). I particularly call your attention to the bit on reading books, and [here's an article on its strange disappearance](https://www.businessinsider.com/ftx-investor-sequoia-removes-sam-bankman-fried-profile-2022-11?r=US&IR=T).

I mean of course you shouldn’t expect better from Vox. That feature section, Future Perfect, was specifically bankrolled by effective altruists (and SBF himself if I recall correctly).

Yeah, I’m aware. I’m just still annoyed, both at this and at the general lack of good critical engagement with the recent deep learning boom in the mainstream press.

i promise i’m feeling appropriately reproachful of myself for defending vox but it’s just really not true that all their ai coverage is about alignment at the expense of social inequity and bias, even within their effective altruist vertical

Honestly happy to see this. I check the headlines on vox from time to time and usually click into AI ones and have mostly seen stuff like the Piper piece I linked. I totally missed this so maybe I’m being *a little* unfair.
to your credit, none of these are bylined by kelsey piper, so the criticism rings true of her specifically

FWIW Kelsey’s was my favorite entry of the rationalist horoscope:

December 23 to January 19: Kelsey TUOC. You just care, like, so much! SO MUCH. You use that softness as a weapon, in that anyone who is not as soft as you is dangerous and you will make sure that everyone knows that. Woe to those who hurt you, for you will send your minions to tell them they are not virtuous. Sometimes you believe victims, if you think that adds to your image.

lmao jesus this is dark
Where is this from?

Are there any good discussions of why the “AI alignment problem” falls outside the purview of more conventional computational-stats “capabilities” work on, e.g., interpretability, causal inference, overfitting, etc.? Whenever I’ve read about this stuff it has always seemed to map loosely onto problems that are already actively worked on in that general sphere (e.g., a “misaligned” AI trained on datasets featuring lots of people smiling as a proxy for happiness, which then doses everyone with a facial-paralysis neurotoxin), and so I’ve always felt work there to be especially relevant and especially under-discussed.
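To put the smiling-as-proxy example in plain ML terms, here’s a tiny toy sketch. Everything in it is made up for illustration (the names `true_wellbeing` and `proxy_smiles`, the curves, the numbers); the point is just that optimizing a measured proxy instead of the thing you care about is bog-standard objective misspecification, not something exotic:

```python
# Toy illustration: an "optimizer" that maximizes a proxy metric (smiles)
# rather than the true objective (wellbeing). All names and curves are invented.
import numpy as np

rng = np.random.default_rng(0)

def true_wellbeing(strength):
    # What we actually care about: rises for mild interventions,
    # then falls off for extreme ones.
    return strength - strength ** 2

def proxy_smiles(strength):
    # What we can easily measure: keeps rising with intervention strength,
    # even past the point where wellbeing starts to drop.
    return strength + 0.01 * rng.normal()

# Naively optimize the proxy by grid search over intervention strength.
candidates = np.linspace(0.0, 2.0, 201)
best = candidates[int(np.argmax([proxy_smiles(s) for s in candidates]))]

print(f"proxy-optimal strength:        {best:.2f}")
print(f"true wellbeing at that point:  {true_wellbeing(best):.2f}")   # negative
print(f"wellbeing at the true optimum: {true_wellbeing(0.5):.2f}")    # positive
```

Nothing in that failure mode needs a superintelligence; it’s the same Goodhart-style misspecification that applied-stats people already worry about.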

It doesn't fall outside of standard ML problems for the most part. There was a tweet by some guy at a robotics startup who pointed out it's mostly just basic concepts (especially from RL) dressed up in apocalyptic language. About 90% of the ideas on Lesswrong are just different, panicked ways of describing overfitting.
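To be concrete about what I mean by "just overfitting," here's a minimal sketch (my own toy example, not from that tweet): a high-degree model that matches its small, noisy training sample almost exactly and then does far worse on held-out data, which is the mundane version of most "misaligned AI" stories.

```python
# Toy overfitting demo: a high-degree polynomial nails the training points
# but (for a small noisy sample like this) generalizes far worse than a modest one.
import numpy as np

rng = np.random.default_rng(1)
x_train = np.sort(rng.uniform(-1, 1, 10))
y_train = np.sin(3 * x_train) + 0.1 * rng.normal(size=10)

x_test = np.linspace(-1, 1, 200)
y_test = np.sin(3 * x_test)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.4f}, test MSE {test_mse:.4f}")
```

Swap "polynomial" for "reward model" and "held-out data" for "deployment" and you've got the skeleton of most of the doom scenarios.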
I should clarify as well: I'm not saying that nothing interesting has been contributed to ML by alignment-type people. DeepMind's work on safely interruptible agents, RL uncertainty, etc. is cool, Chris Olah's circuits investigations into convnets and transformers are interesting, and techniques like RLHF and constitutional AI came directly out of alignment-oriented orgs (even if those same orgs are now using these techniques to justify large-scale mis-deployment of LLMs). It's more that there's an inverse correlation between someone's stature in LW/MIRI and how much meaningful safety research they've actually produced. Even agent foundations, nominally MIRI's main focus, is being done better at DeepMind.
It seems like a consistent theme that the useful work products of alignment-adjacent people have little to do with the putative thesis of "AI alignment", which is that there is a plausible risk of AI deliberately destroying humanity and that "AI safety" is somehow different from other kinds of engineering work. This reminds me a bit of Georg Cantor. He was fundamentally motivated in his work by his religious beliefs, and he also created a lot of useful mathematics, but the math that he created doesn't seem like it has anything to do with Christian theology unless you happen to already be a particular flavor of Christian.
No. Everything in "AI alignment" boils down to conventional engineering practices, as in "we should make machines that only do the things that we want them to do, as opposed to machines that don't". The alignment people get defensive when this is pointed out to them, but they don't have a good response to it. Actually, "AI alignment" does distinguish itself in one notable way: it is entirely about fixing the problems that occur in a kind of machine that doesn't quite exist yet, and whose properties and challenges are therefore not actually known. Incredibly, the alignment folks do not seem concerned that this might be an indication that their scholarship is equivalent to counting angels.

Kelsey Piper is a rationalist so her stuff in particular tends to be less critical of rationalists and adjacent.

Which is so ironic: isn’t their whole bit supposedly about critically evaluating everything?
Everything about them is like this
Tbh I’m pretty disappointed by this turn because I used to love her work on tumblr (the unit of caring).

[deleted]

Because they don't actually give a shit about that stuff, and they know exactly how hollow it would ring if they said something like that: neither they nor the audience they write for cares, so they just don't bother. They'd be fine with perpetuating the shitty parts of society; after all, they aren't suffering for it. The only way they can imagine suffering is in some sort of apocalyptic fever dream, so of course that's the realest problem they want to tackle.

Came here specifically to post this article. I was so hoping it would be about real dangers, but sadly it was not.

I also noticed that her specific concerns were never articulated; she just talked about “alignment” as if her readers have any idea what that means. In this sub, we do know what she means, but it’s strange to see this vague writing in a mainstream outlet.

She’s worried about “catastrophe”!! Good point. I didn’t even notice that since I’m familiar enough with the underlying fantasies. You’re so right though. Who is this article even for, given that?
Another part of the Less Wrong circlejerk, maybe? Adding competition between tech companies to the concerns about "alignment" might count as a novel contribution.

Man, is Kelsey just a shitty writer, or is she trying so hard not to make her concerns explicit (because they sound like a stupid sci-fi story) that the result is deeply confusing?

> There’s no discussion of racial bias, misinformation, perpetuation of social inequities or any real issues.

I’m more concerned with the effects of:

  1. how datasets commonly include outright theft of data, and their outputs essentially amount to laundering the labor of millions of sources. Even in an equitable fantasy where contributions to a dataset are paid for, the few ultimately sell out the many for the gain of the model owner. It’s a no-win scenario that races to the bottom and changes the value of all creation and the very nature of authorship.

  2. how we’re enabling anyone to deepfake anything with one-click software. The privacy, agency, and misinformation implications are staggering.

> how we're enabling anyone to

I want to draw your attention to the implicit assumption that we're better off if only special insiders, parties with money to blow, etc. have access, rather than everyone having access. Certainly with everyone having access we'd see more garbage, but I think it's also reasonable to expect less harm from it and more good. Limited access means we'd still pretend that it's not possible, and so give those who do have access more free rein to abuse it.
IMO, I see a lot of assumptions and massive leaps in this statement. Whether or not the public gets a version of the software released, there will always be a more powerful version behind closed doors. There are, right now, and always will be, special, more powerful versions of software held in private. I don’t see less harm arising from more people having access to something; that just doesn’t math. Could there be a net benefit, though? Is much more harm worth it if it’s a net benefit?

At the extremes we might be looking at a complete societal behavior shift, online and IRL, if at any moment your entire persona can be cloned and manipulated by anyone. Is there even a “net benefit” that could outweigh that? I’m doubtful.

One benefit I can see to having access to the gutted versions of the software is helping raise phishing-attack awareness among average people (e.g., the avatar-generator craze that went viral raised awareness that the tech exists). But we’ve had deepfake videos go viral even before they were easy to make. Do we really need everyone to experience a forest fire to know that fire will burn your skin? The potential dangers of deepfakes have been talked about for years; I don’t think we need to put that tech into everyone’s hands with one-click software and hope there’s a net benefit.

Part of the reason is that they have rationalists on staff.

Maybe journalists are worried about the basilisk torturing them for eternity if they don’t report the way it wants.

This isn’t even an article; it’s just her saying that she is worried about things without explaining why.

Does Kelsey Piper have any journalistic training at all? As far as I can tell, she has an undergrad degree in computational linguistics and seems to have been hired because of her association with EA… and the fact that she’s been blogging since high school? Which, just: blogging is not journalism. The rationalists I know (and the ones that write on the internet) don’t seem to understand the difference, which makes me wonder if she does.

Well considering who their founders are…

Paper clip maximizer is a real problem. That problem is called capitalism.