See this recent piece by known EA and (one time?) friend of SBF Kelsey Piper https://www.vox.com/future-perfect/23591534/chatgpt-artificial-intelligence-google-baidu-microsoft-openai
This AI boom poses very real dangers, but none of them are addressed here. There's no discussion of racial bias, misinformation, the perpetuation of social inequities, or any other real issues.
It’s all alluding to the fantasy of an accidental paper clip maximizer being unleashed on the world.
Journalistic capture, or, “what a16z would like all ‘journalism’ to be from here on out.”
I mean of course you shouldn’t expect better from Vox. That feature section, Future Perfect, was specifically bankrolled by effective altruists (and SBF himself if I recall correctly).
i promise i’m feeling appropriately reproachful of myself for defending vox but it’s just really not true that all their ai coverage is about alignment at the expense of social inequity and bias, even within their effective altruist vertical
FWIW Kelsey’s was my favorite entry of the rationalist horoscope:
Are there any good discussions on why the “AI alignment problem” falls outside the purview of more conventional computational stats “capabilities” work on eg interpretability, causal inference, overfitting, etc.? Whenever I’ve read about this stuff it’s always seemed to loosely map onto very actively attended to problems in that general sphere (eg, a “misaligned” AI trained on datasets featuring lots of people smiling as a proxy for happiness dosing everyone with facial paralysis neurotoxin), and so I’ve always felt work there to be especially relevant and especially under-discussed.
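The smiling-as-proxy example really is just Goodhart's law, and you can sketch it in a few lines. (Toy functions made up purely for illustration, not anything from the alignment literature or any real benchmark.)

```python
# Toy sketch of a proxy-objective failure: "smiling" correlates with
# "happiness" in the normal regime, but optimizing the proxy hard
# drives you to an extreme where the true objective craters.

def true_utility(x: float) -> float:
    # "Happiness": peaks at a moderate intervention level, x = 3.
    return -(x - 3) ** 2

def proxy_metric(x: float) -> float:
    # "Smiling": monotonically increasing in x, so it tracks happiness
    # for small x but keeps rewarding ever-larger interventions.
    return x

candidates = [i / 10 for i in range(0, 101)]  # x in [0, 10]

best_by_proxy = max(candidates, key=proxy_metric)
best_by_truth = max(candidates, key=true_utility)

print(best_by_proxy)  # 10.0 -- the proxy optimizer runs to the extreme
print(best_by_truth)  # 3.0  -- the true optimum is moderate
```

Which is exactly why this reads to me like a dressed-up version of overfitting to a bad objective, a problem statisticians have been working on forever.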
Kelsey Piper is a rationalist so her stuff in particular tends to be less critical of rationalists and adjacent.
[deleted]
Came here specifically to post this article. I was so hoping it would be about real dangers, but sadly it was not.
I also noticed that her specific concerns were never articulated; she just talked about “alignment” as if her readers have any idea what that means. In this sub, we do know what she means, but it’s strange to see this vague writing in a mainstream outlet.
Man, is Kelsey just a shitty writer, or is she trying so hard not to make her concerns explicit (because they sound like a stupid sci-fi story) that the result is deeply confusing?
I’m more concerned with the effects of:
how datasets commonly include outright theft of data, and how their outputs are essentially just laundering the labor of millions of sources. Even in an equitable fantasy where contributions to a dataset are paid for, the few ultimately sell out the many for the gain of the model owner. It’s a no-win scenario that races to the bottom and changes the value of all creation and the very nature of authorship.
how we’re enabling anyone to deepfake anything with one-click software. The privacy, agency, and misinfo implications are staggering.
Part of the reason is that they have rationalists on staff.
Maybe journalists are worried about the basilisk torturing them for eternity if they don’t report the way it wants.
This isn’t even an article; it’s just her saying that she is worried about things without explaining why.
Does Kelsey Piper have any journalistic training at all? As far as I can tell, she has an undergrad degree in computational linguistics and seems to have been hired because of her association with EA… and the fact that she’s been blogging since high school? Which - just - blogging is not journalism. The rationalists I know (and the ones who write on the internet) don’t seem to understand the difference, which makes me wonder if she does.
Well considering who their founders are…
Paper clip maximizer is a real problem. That problem is called capitalism.