That's pretty clearly an example of engineering though. As an engineering student, I agree that ethics are embarrassingly ignored, but it's a far reach from theoretical mathematics. OTOH, the article was clearly not talking about discoveries in pure mathematics.
Fry said she got a sense of the ethical blindspots scientists can
have while describing to an academic conference in Berlin the computer
modelling of the 2011 riots she had done for the Metropolitan police.
The audience, which understood the realities of a police state, heckled
her from their seats.
When Fry returned to London, she realised how mathematicians,
computer engineers and physicists are so used to working on abstract
problems that they rarely stop to consider the ethics of how their work
might be used.
This article is just a faux mea culpa from someone who did
obviously unethical work and generalized her contribution to
mathematicians at large. Obviously it is impossible for people doing
basic research to comprehend fully and in detail the potential
applications of their discoveries, but if you’re hired by the
police…
If people realized how close humanity had come to a global disaster
with nuclear weapons in the past 80 years they’d be a bit less cavalier
about the ‘benefits’ of technology outweighing the ‘costs’.
The danger of reading the article, of course, is that the commenter would be obliged to ask himself what points it actually makes, and to revise his counterarguments whenever an unanticipated one appears.
There’s a lot of “Scott Aaronson”-esque nerd narcissism in one of the
top comments, whining:
Sure politicians and CEO’s can do whatever they want, but the new
tech graduates, they’re the real problem.
Who the fuck literally thinks the politicians and CEOs backing these
kinds of technologies are off the hook?
They’re being willfully obtuse the way Aaronson made some post
railing on feminists for calling out nerds for their attitudes towards
women, ignoring how feminists have been calling out the same damn
behavior in fratboy chads for decades as well.
Come to think of it, this is also the same mindset behind “but All
lives matter!” Yes, all lives literally fucking matter, but society’s
actions toward black people strongly indicate they don’t think black
lives matter at all let alone as much as all other lives.
Lurker here. I don’t really get the hate: The quoted paragraph is
ridiculous, but clearly misphrased and taken out of the context of a
larger post.
Afaik the Hippocratic Oath applies to practitioners, not researchers,
and I think that’s what the poster tried to convey. He clearly has an
issue with holding researchers in maths to the same standard, but adds
that “applications of maths, perhaps”.
In that regard, this doesn’t sound unreasonable - although probably a
misunderstanding of the article, which seems to focus on applications
indeed.
I mean, that’s kinda true though? When you’re trying to solve a very
specific and niche problem, how could you possibly foresee that a bad
actor will have a use for it later down the track? How many steps
removed from actually causing harm do you need to be before you’re
acquitted of any potential wrongdoing resulting from your work? Like
tribesmen who invented spears to go fishing, and then someone else
realized that they could use them on other humans too. I guess they
should have known better than to invent the spear at all. The Chinese
Taoists who invented Huoyao as a kind of experimental medicine
inadvertently created probably the most widely manufactured substance
for killing that there is. Are they liable for not stepping back and
considering the West’s use of the substance for imperialistic
purposes?
You can only make these kinds of moralistic judgements in hindsight.
Einstein discovered a property inherent in the universe, and Oppenheimer
expanded upon it to cause harm to people. Should Einstein have kept his
ideas to himself?
>You can only make these kinds of moralistic judgements in hindsight.
That's a silly take. There are plenty of technologies where the harms are evident, and working on them is evidence of moral bankruptcy. It's arguable that anyone working on machine learning right now, for example, needs their head checked.
>It's arguable that anyone working on machine learning right now, for example, needs their head checked.
I agree to the extent that the person working on machine learning is tasked with building facial recognition technology without any attempt to correct the biases built into the system e.g. how the corpus of facial recognition data is of white faces hence a lot of software is actually pretty bad at detecting people of color unless the buyer is deliberately targeting people of color.
Where would you put someone working on ML because s/he wants to create a software failsafe so people don't die from inattentive/asshole drivers?
oppenheimer certainly should have
of course that comes up against the real problem of professional ethics, which is that barely anybody will actually follow them if it requires hurting their material position in society. Like, seriously considering how shitty the effects of their tech are would require most of the people in e.g. adtech to quit, but they can just decide not to think about it and keep their jobs.
imo though a paper-thin sham of a code of ethics is better than no code of ethics at all. gives you something to cite when you do refuse to work on something, at least.
> Like tribesmen who invented spears to go fishing, and then someone else realized that they could use them on other humans too.
[citation needed], lmao if you think humans haven't been murdering each other since we were rats
>[citation needed], lmao if you think humans haven't been murdering each other since we were rats
Then what tangible good is a code of ethics going to do? The countries of the stemlords who adopt it will be eaten by the countries of those who don't. This is what mutually assured destruction has accomplished.
“Once the rockets are up, who cares where they come down? That’s not my department!”
“You expect us to consider the consequences of our actions? Fuck you, our best work has incredibly far-reaching negative consequences!”
“The danger of considering the risks that might arise from research is that it could lead you to consider risks that might arise from research.”
I mean, I guess that is true? In the same way that the danger of not murdering people is that it could lead you to not murder people.