AI “safety” and “alignment” are generally regarded as fringe topics by professional AI researchers, who prefer to pursue lines of research grounded either in provable mathematics or in empirical science.
But what, exactly, do professional researchers think about “alignment”? A couple of rationalists do a survey to find out, and the results contain at least one surprise.
One LW comment summarizes the results thusly (link):
This seems like people like AGI Safety arguments that don’t really cover AGI Safety concerns! I.e. the problem researchers have isn’t so much with the presentation but the content itself.
An EA comment also notes (link):
Some of the better liked pieces are less ardent about the possibility of AI x-risk.
Could it be that “alignment” is an unpopular topic because professional researchers object to it on substantive grounds? What should be done about that?
Some people suggest that this is a good reason to favor a more propagandistic approach to promoting their interests (link). The most extreme version of that opinion (link) is generally rejected, though. To their credit, most commenters think that it is important to restrict their talking points to ideas that they actually believe to be true.
Notably absent from most of the comments is any indication of doubt regarding the AI apocalypse. Professional researchers usually regard mainstream rejection as a good reason to reconsider their hypotheses, but many rationalists are apparently not so easily humbled.
One brave rationalist does venture that, perhaps, countervailing voices should be given some credit (link):
It seems to me that there is a risky presupposition that the arguments made in the papers you used are correct, and that what matters now is framing[…]It seems suspicious how little intellectual credit that ML/AI people who aren’t EA are given.
Another rejects this possibility, though, apparently on the grounds that everyone is aware that the robot apocalypse is nigh, but academic researchers have a poor moral constitution (link):
I suspect different goals are driving EAs compared to AI researchers. I’m not surprised by the fact that they disagree, since even if AI risk is high, if you have a selfish worldview, it’s probably still rational to work on AI research.
It is always satisfying to witness the rediscovery of beliefs that are well-known from older religions: “atheists can’t have moral principles”, or perhaps even “atheists don’t exist”.
In fairness, ML researchers usually don’t care that much about safety from well-known and demonstrated risks either. This is true of scientists in general - “it’s not my job to solve the world’s problems, it’s my job to advance science by researching topics I find interesting”. And they certainly don’t like being preached to.
Of course I’m biased, because I do think there’s some chance of an AI apocalypse. But I think most of the ridiculous Rationalist culture around this doesn’t stem from this belief at all, but rather from constantly taking whatever drug Yudkowsky is peddling.
Since I had to look into EA after the whole FTX debacle, it’s become clear to me that it’s only a scheme to enrich themselves. Otherwise, how would you explain that the main donations from effective altruists are always directed towards these “AI safety organizations” with very little to show for it?