r/SneerClub archives
The Perils of Perfection: Against Technological Solutionism (https://www.nytimes.com/2013/03/03/opinion/sunday/the-perils-of-perfection.html)

There’s a lot I don’t like about Evgeny Morozov, but this essay gets at some of the general problems common to rationalism, and the sort of STEM-informed view of society that many nerds hold in general.

Sorry that it’s old, but he doesn’t seem to have written many essays recently.

> There's a lot I don't like about Evgeny Morozov

I'd be curious to know why. I have a vague recollection of coming across some of Morozov's essays and sort of liking them several years ago, but reading this one today, not so much. Maybe it's because back in, like, 2011, letting some air out of the Silicon Valley hype balloon was a novel contrarian take, whereas by now it's started to feel a bit cliché; maybe it's because condensing his thesis down to Times-op-ed size required sacrificing all the nuance and supporting arguments; in any case, he just comes off as a grouchy curmudgeon indiscriminately flinging contempt at any and all attempts to enhance the human condition by technological means, without offering any concrete alternative vision. Now that I think about it, that's sort of how I've felt reading your recent posts on SneerClub. Have you considered that you may have overcorrected for the naiveté of your youthful embrace of Rationalist ideology? The zeal of tech entrepreneurs is a powerful force for change in society. Not always for good, not always even intended for good, despite all their highfalutin humanitarian rhetoric, but the market culls most of the dumb ideas anyway, so sneering at weird soon-to-fail start-ups is a pretty weak argument against the broader phenomenon of Silicon Valley. Talking about context and nuance and complexity is all well and good, but for it to matter you still have to figure out how to do something different from the naive techno-utopians that lets you actually outcompete them in politics or the market or some other arena where winning takes more than just accurately identifying your opponents' weaknesses.
He might be better in his books, which I haven't read yet, but I don't think he makes a strong enough positive case for the things he's lauding in this essay--like partisanship, hypocrisy, ambiguity, etc. It's not easy to make a case for these things, but he's certainly not doing them a service here.

> Have you considered that you may have overcorrected for the naiveté of your youthful embrace of Rationalist ideology?

lol. No, I don't take rationalism seriously. Dunking on people is fun. This is light entertainment while I wait for code to compile.

edit: do you ever let up on the earnestness btw? Not everything online has to be fully constructive. Incidentally, the kind of political change that I'd like to see opposing Silicon Valley is stuff like the Disney worker strike, teacher strikes, Fight for 15. GDPR is good; hopefully more regulations like that come out. I hope the Tesla workers succeed in unionizing. I canvass, phonebank, watch for ICE vans, etc.
I like to make jokes whenever I can think of a good one, but you are definitely correct that earnestness is my default. I joked around more when I was younger, but that was long before I ever had a Reddit account. I don't know why I so often feel moved to interject my earnest appeals into the circlejerk here ... But yeah ... usually a few minutes of introspection suffice for me to come up with a satisfying answer to a question like that, but I'm drawing a blank this time. Weird. I guess one factor is that you can't be all that reflective and self-doubting while laying down sick burns. You really need to project total confidence in the correctness of your views to pull that off, which makes it pretty galling if you subsequently discover you were wrong. I can't stop being reflective and self-doubting, so I take the coward's way out and try to treat my interlocutors charitably. That still doesn't explain why I bother to post in the first place, though. I don't know ... It just feels right.
> pretty galling if you subsequently discover you were wrong

Nope. People make mistakes, people move on. Being right all the time is really not as important as rationalists/nerds-in-general think it is. This is the kind of thing I mean when I think about rationalists having an impoverished value set. Being wrong is bad, but it's not the end of the world. Being a good person, understanding other humans and appreciating them, that's vastly more important.

For all that rationalists put a lot of effort into checking themselves, AFAICT they're mostly failures at it. Whenever I learn enough about a field that rationalists like to talk about (AI, overseas giving), I usually end up feeling humbled and realizing that it's something humans aren't good at, period. This is not an attitude I see in rationalists. They pride themselves way too much on their intelligence and think that because they're striving for objectivity, they must be good at it. Same as how a lot of Silicon Valley meritocrats remain horribly racist people.

At some point, as I'm sure you've already considered, you have to take a stand and stop doubting if you want to get anything done. It'd be nice if humans could fully commit to a course of action while still remaining in 10% doubt of it, but we don't appear to be wired that way. I've read enough about the history of racism, the effects of poverty on IQ, population genetics, and the current state of racism in America to conclude that the connection between race and IQ is basically bunk. As a good Bayesian, it seems to me vastly more likely that the humans who continue to promote a link between race and IQ, a link that has been promulgated since the first white man lashed the back of the first African slave, are merely in denial of their racism. In this they follow the example of their racist forebears, who justified slavery by saying that they were taking care of people too inept to care for themselves.
The evidence about racism in the rationalist community is clear enough that I'm going to commit, because I have other shit to move on to. They are racist, and I will dunk on them gleefully.

> You really need to project total confidence in the correctness of your views to pull that off

This is also just not true. It's not about confidence in your factual views. It's about confidence in yourself. Standup comedians lie about things all the damn time. Improv people do the same; they literally make things up on the spot. And normal folks will say patently absurd things, because they know it's a joke and they know the other person's going to get it.
I could have been clearer about this, but I was talking about the particular kind of wrongness where factual and moral error become inextricably intertwined, e.g. when Alice tells the truth about something she witnessed firsthand, and Bob ridicules her with cutting insults laced with moral indignation, because the things she said contradict his ideology, and the only way he can integrate that fact into his worldview is by imputing vicious motives to her. But then Bob finds out that she was right and his ideology was wrong. Sure, it's probably not the end of the world for Bob, but it's a damn bitter pill to swallow, as evidenced by the fact that many a lapsed ideologue has admitted finding it preferable to keep their growing doubts to themselves for years before they reached a tipping point, meanwhile maintaining a public facade of undiminished faith. I don't know how to admit that kind of doubt while simultaneously doing the morally indignant ridicule thing.

> It'd be nice if humans could fully commit to a course of action while still remaining in 10% doubt of it, but we don't appear to be wired that way.

If you need more than 90% confidence in a course of action before you're willing to commit to it, it's probably because the downside risk is really bad. Of course, different people have differently calibrated risk tolerance and decisiveness, and what's optimal for one set of circumstances can be disastrous in different circumstances.
Uh...dude, making fun of rationalists is really not in the same category as leaving a religion.

> If you need more than 90% confidence in a course of action before you're willing to commit to it, it's probably because the downside risk is really bad.

Let me guess, you're also a terrible dancer.
The continual problem is that technological fixes rarely actually fix anything if the underlying socio-political issues are left unaddressed. It also happens that technological fixes make some people less likely to pursue socio-political change, either because it's too hard or because they think technology has already solved the problem.
Plus unanticipated side effects. Who could've foreseen climate change until we were well into the Industrial Revolution?
It's simple: we just solve the problems caused by introducing technology by introducing more technology! Never mind that solar panels and battery technology rely on a limited supply of rare earth elements that require extensive mining and are held by a small number of companies/countries.
And that's why I'm an anarcho-primitivist.
Wow. So it turns out that solving real-world problems is thornier than the Singularity Summit, Effective Altruism, etc. led you to believe, and from that experience you humbly conclude that we must ... abolish civilization? Yeah, you definitely overcorrected. This is why the Horseshoe Theory exists.
So charitable! It's a considerably more complicated chain of reasoning than that, with little reference to the rationalists, actually, but I doubt you're interested. Though TBH it's fun picking an ideology that's unimplementable. It's great to have an excuse to be lazy.

> This is why the Horseshoe Theory exists.

Uh, no, don't lump me in with the fascists. An-prims, like all other anarchists, favor egalitarian societies and the abolition of all hierarchy. They're anti-racist, anti-sexist, etc. Pretty intellectually lazy of you to subscribe to horseshoe theory.
> So charitable! It's a considerably more complicated chain of reasoning than that, with little reference to the rationalists

Fair enough, I was being snide there.

> Though TBH it's fun picking an ideology that's unimplementable. It's great to have an excuse to be lazy.

Huh. "Lazy" certainly wasn't the impression I got of you from all your talk about the real-world activism you're doing.

> Uh, no, don't lump me in with the fascists. An-prims, like all other anarchists, favor egalitarian societies and the abolition of all hierarchy. They're anti-racist, anti-sexist, etc. Pretty intellectually lazy of you to subscribe to horseshoe theory.

Maybe the Horseshoe Theory isn't quite the right term for what I had in mind. It was something more like: there seems to be a correlation between certain psychological traits and attraction to radical fringe ideologies that is not entirely dependent on the specific content of those ideologies. This becomes especially apparent when someone who used to support one radical fringe ideology turns up later supporting a different, contradictory one. Justine Tunney is the first example I can think of, but I'm sure I've seen others.
Do people ever actually implement socio-political fixes? :P
Not nearly often enough, and rarely in any significant fashion. :(
These nerds need to do a lot more acid.
I can't remember where, but I'm pretty sure I recently read something critical of Silicon Valley culture, which cited without further commentary the fad for LSD microdosing as one piece of evidence. I don't know if the intent was to paint a picture of drug-fueled decadence or crackpot self-improvement lifehackery or both. In any case, my impression from anecdotal evidence is that (a subset of) the nerds are doing plenty of acid, rationalists very much included. ([Seriously](https://knowingless.com/2016/08/21/421/).) Not sure what Evgeny Morozov would make of it all, but it would certainly be in keeping with the tone of the linked article to ridicule them for it.
Yeah, I'm mostly kidding, unfortunately. If we've learned anything from Joe Rogan, it's that psychedelics don't always fix assholes.
They never fixed me!

Morozov is a decent sneerer:

> One of the dominant narratives that I’ve identified in tech debates is this constant tendency to assume that some technology (in this case, facial recognition) is already here, its further spread is inevitable, and all we can do is accept it and adjust our norms—a rhetorical approach that I also describe as “technological defeatism.” Alas, I don’t find “technological defeatism” an appealing option, on either historical or moral grounds.

> So you criticize me for engaging with projects that are only at a prototype stage—but then, once the projects are out of that stage and become ubiquitous, you say that, hey, these gadgets are here to stay and we can’t do anything. What’s a good time to scrutinize them, then? Many of the projects that I like—those caterpillar-shaped extension cords—are also at prototype stage. But so what? One of the reasons for getting involved in these debates is to precisely make certain prototypes—those whose underlying philosophies I share—more likely, and some such prototypes—those whose philosophies I reject—less likely. If it’s all up to the market (or, as you put it, the “users”) there’d be no point in technology criticism, which, on my very ambitious reading, can amount to so much more than its current function: gadget reviews.

http://www.slate.com/articles/technology/technology/features/2013/to_save_everything_click_here/to_save_everything_click_here_what_farhad_manjoo_gets_wrong_about_my_book.html