I’ll give Paul some credit here: Emperor’s New Clothes is actually a pretty good analogy!
After all, it’s a cautionary tale about how a group of self-described experts was deluded into thinking they saw something significant because it was sold to them as a way of indicating that they were superior to the rest of society.
What does this even mean? Or are the emperor’s new clothes here a metaphor for not blindly accepting bigoted outcomes just because we’re getting them from a really complicated algorithm made by the people who benefit most from the status quo, rather than hearing them from those people directly?
"My system that produces outputs based on its inputs produced racist outputs when I input a bunch of racism. *Clearly* this proves that racism is inherently true and good."
This has always been the danger of automated decision-making - it gives people license to say "it's not racist, it's just science! You're not *anti-science*, are you?" because they can pretend that the models don't reflect the biases of the humans who made them, that they were born pure and naked from the primordial essence of the universe like Venus from the sea.
> they can pretend that the models don't reflect the biases of the humans who made them
I disagree with this framing. It's not the biases of the creators, it's the biases of the *data*. In some sense, I think that's worse. Because it means that to create an unbiased system, you have to understand the data, but the whole *point* of the ML system is to understand the data in ways you can't. So the question of how to create an unbiased system is extremely difficult (this, incidentally, is the sort of thing rationalists would be concerned about if they were not fundamentally unserious people).
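To make that concrete, here's a minimal toy sketch in Python (entirely synthetic data, a made-up 0.8-point penalty, not any real system): the model never sees anyone's intent, only the historically biased labels, and it faithfully learns the gap anyway.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with *identical* underlying qualification distributions.
group = rng.integers(0, 2, n)    # 0 = majority, 1 = minority
merit = rng.normal(0.0, 1.0, n)

# Historical labels: past human decision-makers docked the minority
# group a hypothetical 0.8-point penalty before deciding "qualified".
label = (merit - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, label)

# Same merit, different group -> different score. The model is
# "just reflecting the data", which is exactly the problem.
print(model.predict_proba([[0.0, 0]])[0, 1])   # average majority applicant
print(model.predict_proba([[0.0, 1]])[0, 1])   # average minority applicant
```

Nobody who ships that model has to hold a biased opinion; the penalty is already in the labels, and no amount of staring at the weights tells you where it came from.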
>It's not the biases of the creators, it's the biases of the data.
It’s also the biases of the creators, although the word isn’t strictly “bias”; the more comfortable word would be “ideology”, or indeed the “frame”, of the creator. If your data is “all of the internet”, there has to be a decision-making chain that gets you to “I can use all of the internet for this”, or “if I take the whole internet and tweak it this way and that, that’ll do”; or sub in every little decision about how the data are managed, or every statement of “we don’t have an ideology, we just want to create [intensely ideologically laden thing]”. Indeed, even your decision to use the word “bias”, with its implication of a natural unbiased equilibrium, is itself laden with and/or loads the interpretation of the data before you or I have any in front of us, or a model to jam it into.
Yeah, essentially it bakes existing inequalities and biases into the decision-making process in an opaque and potentially unfixable way. For example, a triage algorithm might decide "black men have worse medical outcomes anyway, so they get lower priority for medical care", simply perpetuating an oppressive cycle even with the human reasons stripped away. Which is scary, to say the least.
A black box might put sickle cell anemia patients (a condition that predominantly affects people of African descent) lower on transfusion priority because of historically worse outcomes. Or systematically assign obese patients (among whom Hispanic and Black patients are overrepresented) lower healthcare priority. These decisions result in even worse outcomes, which strengthens the bias in the algorithm. Spooky stuff, and I’m honestly worried this is already happening on organ recipient lists.
It can also lead to easy feedback loops. A racist government checks people with an immigration background more often, so the data gets a bias against people with an immigration background, which then gets used to justify the AI having that bias because "it follows from the data". This already happened in the Netherlands, and the software developers, when attacked over this, even defended their choices.
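Here's a toy simulation of that loop (made-up rates and a made-up budget rule, nothing to do with the actual Dutch system): both groups violate the rules at exactly the same rate, but because next year's checks are allocated from this year's findings, the initial over-scrutiny of group B never washes out, and the "data" keeps appearing to justify it.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_RATE = 0.05                       # identical real violation rate
POP = 10_000                           # people per group
check_rate = {"A": 0.10, "B": 0.30}    # group B starts over-scrutinized

for year in range(5):
    found = {}
    for g in ("A", "B"):
        checked = rng.random(POP) < check_rate[g]
        violated = rng.random(POP) < TRUE_RATE
        found[g] = int(np.sum(checked & violated))
    # "Data-driven" policy: split next year's fixed check budget in
    # proportion to where violations were found this year.
    total = found["A"] + found["B"]
    check_rate = {g: 0.40 * found[g] / total for g in ("A", "B")}
    print(year, found, {g: round(r, 2) for g, r in check_rate.items()})
```

Group B gets flagged roughly three times as often forever, purely because of the starting condition, and every year the numbers "confirm" the policy.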
>The algos are telling the truth and black men *actually do* have worse health outcomes
They do. But the point of triage isn't to solve this problem, it's to allocate limited medical resources. This is an example of how automated decision-making in a condition of scarcity will generally perpetuate existing inequalities, and the problem is only compounded by the more human factor that the people in power in an unequal system are totally fine with this outcome.
>the more data we plug into automated decision making tools the better they will accurately reflect reality.
And reality is full of systemic inequalities soooooooooooo
EDIT: Just looked at your post history and it turns out you're a *literal eugenicist*, of course you're fine with perpetuating existing inequalities.
I’m sorry you grew up in such a repressed environment and will refrain from using the word “asshole” cheerfully and non-judgementally in this context again
Reddit mods don't be weirdly belligerent challenge (impossible)
It's all good tho, I misread your tone (and I didn't report the eugenicist because I was half asleep)
I like the way you owned misreading that, by the way; normally this exchange is an invitation to a five-hour seminar on my flaws as a Reddit user, moderator, and human being.
I mean, the problem here is that the model isn't actually showing us anything about what the correct actions to address black men's health outcomes are. It's just further baking in a problem.
In this particular example, we know that part of the struggle black men face getting care is that their reported symptoms tend not to be taken as seriously, so they get triaged too far back in line, leading both directly to worse care outcomes and to reduced engagement with the medical system. This happens due to racist-but-still-comprehensible reasons, like increased assumptions about drug-seeking behavior, and for some batshit where-did-that-even-come-from reasons, like a belief that black people experience less pain. If you train an AI on a dataset based on the results of those assumptions, you're going to get a model that continues to hurt black men the exact same way: by putting them farther back in line than they should be.

The only difference is that where you could watch an ER nurse and ask them about their triage decisions to get a sense of where these biases are coming from, the AI model is totally opaque. There's no way of differentiating whether this is actually an optimal result and the optimal result just sucks, whether the algorithm works but there are external issues like access and funding, or whether there were biases and patterns in the training data that are now baked into the AI process itself.
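You can't interrogate the model the way you'd interrogate the nurse, but you can at least check *whether* it reproduces the historical gap. A rough sketch of the kind of audit I mean (hypothetical arrays, not any real triage system): compare the model's average priority score across groups within matched severity bins, so a remaining gap can't be waved away as "they were just sicker".

```python
import numpy as np

def audit_by_group(score, severity, group, n_bins=5):
    """Mean model score per group within matched-severity bins.

    A black box won't tell you *why* it ranks people the way it does,
    but a gap between groups at equal severity tells you *that* it is
    reproducing the historical triage pattern. (Bins where a group
    has no members will show nan.)
    """
    edges = np.quantile(severity, np.linspace(0.0, 1.0, n_bins + 1))
    bin_idx = np.digitize(severity, edges[1:-1])   # values 0 .. n_bins-1
    report = {}
    for b in range(n_bins):
        mask = bin_idx == b
        report[b] = {g: float(score[mask & (group == g)].mean())
                     for g in np.unique(group)}
    return report
```

One catch: the severity measure has to come from somewhere the model didn't get to define, otherwise the audit just inherits the same baked-in assumptions it's supposed to detect.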
Computers are better than people in that they are cheaper and more consistent, not in that they are overall less prone to bias or errors. As AI/Machine Learning/Predictive Modeling systems become more widely available, it is a SERIOUS problem to figure out how to make sure they don't just make the same mistakes we make now but consistently and forever.
As others have pointed out, biased data IS creator bias, because creators determine data sources. Bias doesn’t have to be intentional; it can be unknowing blind spots that shape assumptions.
There are actually several levels of bias that all end up compounding each other: bias in data collection, and bias in choosing which data to train the ML algorithm on. And that's *before* we get into what question you ask the ML to answer.
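And the collection level alone can do real damage even with perfectly honest labels. A quick synthetic sketch (made-up feature weights, hypothetical group sizes): if one group is barely represented in the sample, the model simply fits the majority's pattern and quietly performs worse on everyone else.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_group(n, w):
    # Each group has its own true relationship between features and outcome.
    X = rng.normal(0.0, 1.0, (n, 2))
    y = (X @ w + rng.normal(0.0, 0.5, n)) > 0
    return X, y

# Collection bias: the minority group is barely sampled.
Xa, ya = make_group(9_500, w=np.array([1.0, 0.2]))   # majority
Xb, yb = make_group(500, w=np.array([0.2, 1.0]))     # minority
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# The model fits the majority's pattern; minority accuracy suffers.
Xa_t, ya_t = make_group(5_000, w=np.array([1.0, 0.2]))
Xb_t, yb_t = make_group(5_000, w=np.array([0.2, 1.0]))
print("majority accuracy:", model.score(Xa_t, ya_t))
print("minority accuracy:", model.score(Xb_t, yb_t))
```

No label was ever dishonest here; the skew came entirely from who got sampled.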
There is an old Media Lab koan about this exact thing:
>In the days when Sussman was a novice Minsky once came to him as he sat hacking at the PDP-6. "What are you doing?", asked Minsky.
>"I am training a randomly wired neural net to play Tic-Tac-Toe."
>"Why is the net wired randomly?", asked Minsky.
>"I do not want it to have any preconceptions of how to play"
>Minsky shut his eyes,
>"Why do you close your eyes?", Sussman asked his teacher.
>"So that the room will be empty."
>At that moment, Sussman was enlightened.
What is there to comprehend? You straw-manned the point because you’re a giant douche who complains about anti-racism campaigns in Canada coz you have nothing better to do than make life harder for marginalized communities. Seems I understood your point perfectly.
Nobody said “all data is racist”. People are saying “garbage in, garbage out.” If you feed an AI an example of a systemically racist society and say “make our systemically racist society even *more* efficient at being systemically racist”, don’t be fucking surprised when the AI does exactly what it is fucking told.
Tell me then, how does “science” account for racism in America? What’s the equation?
Accounting for racism is a messy, human process. Which I guess is why self described “rationalists” (who I don’t find particularly rational, just intellectually overconfident) dislike it so much.
You could start by learning American history. You might be surprised to learn that black people suffered under literal chattel slavery as recently as 150 years ago. For many decades after, they were terrorized by angry white men, especially in the South. We still have de facto separate-but-equal public schools, given how ridiculously we fund them via local property taxes. I haven’t even mentioned the school-to-prison pipeline, redlining, environmental racism… the list goes on and on and on. You need to have your head in the sand and watch Tucker Carlson every night to think our society has nothing to account for.
He’s a terrible communicator who is convinced that he’s witty and erudite. It’s genuinely hilarious how often the obvious follow-up to his tweets is “what the fuck are you even trying to say?”
It's an extra step added to the "well, if you look at it, blacks commit most of the crimes" dance used to justify belief in racial disparities in outcomes.
In this case I think most of you are wrong, in that I think he just means efficacy on specific problems rather than some futuristic application of AI that would have politically actionable biases or whatever.
The reason this is ambiguous in the tweet is that he isn't confident enough to give specific examples, because it's not really his field.
Every now and again he says something insightful; this is not one of those times.
I understand the pettiness and envy worming through that man's heart, but surely he must know that commissioning art and watching the process unfold and getting to hold a finished work in your hands, knowing you've been part of someone's climb to ever greater skill, is one of the best feelings bestowed on this universe.
I thought this was about the various “AI chat assistants” that companies keep launching, which within minutes start spewing racist things. Clearly that means they’re onto something!
To be honest, with the new high-parameter multimodal AI models and the fine-tuning that’s going into these algorithms, a lot of “commercial” artists like graphic designers will probably find themselves out of work in the near future. In the medium to long term it’s a bit harder to say how effective AI will be at making art.
I really enjoy that the techbro pitch for 'AI' was that white-collar busywork would be automated away so you'd have more time for creating art; a few years pass, and automating art is one of the few real-world uses of artificial 'intelligence'.
Also AI will never be effective at making art because to make actual art you have to be sentient. 'AI' 'art' just never appeals to me because I know there's no actual thoughts or feelings behind it (though I feel the same about the overly polished Artstation stuff or these hyperrealistic pencil drawings of an eye, tbf)
It's telling that I'm more amused by the prompts that people give to DALL-E than what DALL-E actually produces.
"Cookie Monster doing burlesque" is art regardless of what the actual output is. Good art, bad art? I can't tell, but someone decided that it needed to exist and created it. The images by themselves are nothing; they're just flavor to punctuate the prompt. DALL-E could totally fuck it up and I'd still think that the prompt is good.
Ooh, well said; that puts words to how I’ve been feeling about DALL-E. It’s like how people were super impressed by early AIs in video games, but now we just take them for granted, and PvP is still where you go even if AIs can play better than any human. Hardly anyone watches the computer chess championships on Twitch compared to, say, Hikaru’s streams, even though the worst modern chess engine would wipe the floor with any human.
I have seen the gaming world go 'PCG is the future' and then go 'well, not really', so I'm a bit skeptical on that front.
And I mentioned art because some art contest was won by a program recently, and the work is a bit controversial. (And tbh I thought the art looked cool, but only if you didn't look at it too long; it also has other issues: the AI does the same thing orientalist art did, mixing and matching various styles without knowing any of the context behind them and without any coherent idea.) But that's just a sidenote on the whole art contest thing; not even sure it's about that.
This is going to be some anti-artist take, isn't it?