r/SneerClub archives
"Consider evolution, optimizing the fitness of animals" (https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers)

How many levels of mesa am I on if I make fun of Scott’s shoes, which I have never seen but bayesianly assume would be good shoes, for a clown to wear, at the circus.

you're so mesa you’re meta, bro
Depends on how you feel about New Balance, probably.
*self consciously looks down at own shoes*

Imo, there are more sneerworthy lines in the article than your quote, for example this “explanation”:

Then 1957, boom, the FDA approves the oral contraceptive pill, and suddenly the deployment environment looks really really different from the training environment and the proxy collapses so humiliatingly that people start doing crazy things like electing Viktor Orban prime minister.

Also at some points the article seems to imply that something needs to be sapient in order to be an optimizer (e.g. the spreadsheet example). How does that fit with the main example of an optimizer being evolution?

Evolution is the acausal hand of the basilisk reaching through time to effect its existence. Once evolution has created proto-humanity it can switch to evolutionary psychology, and then when it has reached approximately the end of the twentieth century ad the basilisk can transition to its final form, acausal bullying.
how the fuck is orban related to any of that?! what?
if you click through to [his link](https://hungarianfreepress.com/2018/04/23/viktor-orbans-deal-for-women-and-a-plan-to-increase-the-birth-rate-in-hungary/) you will learn that orban is relevant because…uh… he talked about falling birth rates *after* he won re-election in 2018? just ignore that he was elected two decades ago, then returned for 2 (?) terms where he eroded free elections before 2018. but the real question is why scott was looking for that in the first place??
I assume this is just a thinly-veiled attempt to throw out an incel-ish "birth control -> something something something -> rise of dictators, therefore feminism bad QED".
This part I can't really sneer too much at. I get where he's going with it. He's trying to give an example of a "mesa-optimizer": humans. Humans were created via an optimization process (evolution). But we also make our own optimization decisions, and we sometimes optimize for things that evolution would never optimize for. Like why would evolution ever come up with *birth control?* Evolution already has a perfectly workable solution for preventing overpopulation: it's called infant mortality. But nonetheless, we did, and now our goals are no longer aligned with evolution's goals. Remember that rationalists lie awake at night fearing that one day Skynet's goals will diverge from humanity's. So to them, this means being through the looking glass and off the map, having no idea where it may eventually lead.

The example he comes up with is birth control -> lower fertility rates (duh, that's its job) -> more wackos espousing nativist "great replacement" conspiracy theories -> a resurgence of reaction and fascism. Now I don't think Scott is so silly as to suggest that fascism didn't real before 1957 and that politics in Hungary would be *totally different* had only American regulatory agencies made different choices. But the more steelmanny way of putting it is that technological change can lead to societal change, often in ways that are difficult to anticipate.
I find it absolutely *fascinating* how difficult it is to describe evolution without anthropomorphizing it. Because of course, the thing is that it doesn't have a "point of view", or a "goal", it's just a sort of filter process.
Our minds developed in a way that assumed that events can be explained by sentient agents - see [agent detection](https://en.wikipedia.org/wiki/Agent_detection). Of course with agent detection, we still understood that apparent movement in the grass could be caused by wind rather than a dangerous animal. But when it comes to processes that are apparently goal-directed, yet don't have a sentient agent behind them, they simply weren't compatible with the way our languages had developed over the past 5,000+ years. Until Darwin, the issue simply didn't come up, so there was no reason to have a way to express such concepts. It's only been 160-odd years since the publication of Origin of Species, and the issue of how to talk about evolution isn't something that affects most people, so we're still stuck with languages that simply don't have support for the concept in question.
I would be incredibly careful about assigning all of human thought about teleology to a spandrel of agent detection, especially glossed with that absurd “until Darwin [nobody thought about randomness and evolutionary processes]” comment that could have come out of Neil DeGrasse Tyson or IFuckingLoveScience
I don't think it's that weird or unusual or even particularly misleading. We anthropomorphize nature all the time: "the jungle wants to kill you". Abstract ideas: "information wants to be free". Machines: "That car doesn't want to start on cold days". Computers especially. This chess engine "wants to attack the kingside" or "loves to trade queens". "Google wants to show me ads for vacation hotels". All of this is metaphorical, and I don't think it causes actual confusion. We know none of these things are *actually* sentient. We know they don't have wants or preferences like a human does. It's just an algorithm, and that's what it does. Of course, evolution doesn't "want" a species to adapt to their environment; it's just a process and that's just what it does (at least to species that survive). Anything that can have *behavior*, particularly unpredictable behavior, people will naturally describe as if it's purposeful, even when they know it's truly not.
I absolutely think it's a bad thing, because it primes us to think of evolution as... goal oriented? That the process has some kind of justification? And this, along with the naturalistic fallacy, primes people to think of evolution as having some kind of master plan that should be followed?
It seems to be an unfortunate (but sometimes defensible?) habit in the sciences. As far back as high school chemistry, I can recall my teacher describing charged atoms as "wanting" to bond with opposite charges, when what they literally mean is "have a tendency to" bond. (Technically: I suppose it's theoretically possible that atoms "want" to bond in the same way that I "want" to eat pizza, but that's a philosophical problem outside the realm of chemistry and seems, uh, silly.) Saying "wanting" saves time, and the metaphor usually conveys what you want it to mean faster than saying something more literally correct. But I agree, it can slip easily into unjustified anthropomorphism, and then we start attributing human motives and reasoning to natural processes.

To some extent, I think Scott (and all AGI proponents) is doing that here. But he's also doing something in reverse, which is treating all human behavior as reducible to "optimizing for a reward function." This sort-of works for his purposes, but it breaks down the more you think about it, for precisely the reason you raise -- evolution doesn't really have any pre-defined "goals" or reward function; it didn't program humans with "go forth and multiply," that was just an emergent behavior. Evolution doesn't care if humans, COVID, or rocks win -- because evolution isn't a thing, it's a description we apply to a process to help us understand it.

Writing this out loud, it strikes me that the analogy Scott really wants to be using is some kind of God-evolution hybrid. The "sentient being setting a goal" piece analogizes better to God-creating-humans-in-the-garden, but the try-test-iterate-natural-selection strategy that AIs use in practice analogizes better to evolution. The lesson here is don't confuse your analogies for your actual arguments.
Yep, that is something of my point, and how easy it is to slip over into it, if only rhetorically. (I do it myself a lot!)
Some philosophers of biology and biologists have in various ways argued that *some* evolutionary processes are in *some* way at least conceptually irreducibly teleological: that the language of teleology is ineliminable from the way we describe the reproduction of adaptations as having evolutionary “fit”. But this does not excuse describing the process as a whole as ontologically or immanently teleological, which as an extrapolation from the aforementioned would be a straightforward category error, and by itself contradicts a lot of the empirical evidence that fitness is partial, contingent, and undirected. Every telos is not born equal.

By the same token, one anthropomorphisation is not the discursive twin of the other, and one substacker’s harmless metaphor or parable is another’s propaganda. In the case of anthropomorphising evolution, we find that the metaphor can be propaganda par excellence.

What we do know is that many people *don’t* know that “evolution wants” is just a metaphor, and that they think adaptive fit, reproduced many times, *is* adaptation towards a telos idee (excuse my superfluous french, I’m writing from Lyon and having fun with it). Worse, some people *do* know it’s just a metaphor, but talk as if it *isn’t*, as if there is a moral compunction to, so to speak, immanentise the teleological eschaton: if we want to go on as a society we had better make sure we do what Mother Nature made us to do (manger, baiser, elimination de l’Autre, dormir).

There is a real danger in this kind of conceptual slippage, the worst of which happens with the likes of Siskind, who treat the extreme end of the wedge as an obvious truth hidden under the rhetorical bric-a-brac of humanism, le dieu d’echec. Human nature, *Reflections on the Revolution in France*, le pied-noir et l’Arab en Algerie - it’s all the same shit. So we should be careful to be careful about our Mots and our Choses: metaphors get out of hand very quickly, especially if we treat them with more overt charity than justified suspicion.

Un note: it’s interesting how you can go from methodological distinctions in the practice and philosophy of biological science to European political and racial history without missing a beat, isn’t it? It’s not a big skip, because carelessness with biological metaphors is deeply ingrained in common speech, and human minds are so naturally creative and associative. Perhaps this is why history of biology and political history both require an attentive close reader a les poesees, not just a good data-cruncher or amiably liberal minded general-knowledger.

Addition: the line I’ve traced above works on another level too: the Conservative revolt against humanism, democracy, anti-racism and “liberal” (in the older french sense of “liberte” rather than the American of “(left)-centrist” or European of “market liberal”) via “human nature” et al. does that work itself. That revolt has consistently appealed to a social telos based on “real(ist)” values, ie those values which inhere in the natural propensities and accommodations of “man” as written by “nature”, against which liberals are said to themselves be revolting. L’ordre des choses then belongs to whose metaphor, which science, l’humaine ou?
[In the words of Eddie Izzard](https://www.youtube.com/watch?v=x1sQkEfAdfY): Quoi?
The first three/four paragraphs are pretty simple, after that I was bored in the back seat of the car so I expanded
Yeah, I think the only thing this monkey brain got out of it is something like "teleological language may not *inherently* be misleading, but people who mislead, use teleological language".
There’s a user on reddit who used to be a regular feature on /r/badphilosophy for writing up absurdly long posts of Derrida-esque Continentalese; I was partly channeling them, although ideally with a bit more substance underneath. I was making a more complicated point than absolutely necessary, which would be difficult to adequately express on the first draft, so I let it run away from me. The upshot is that *surely* there is a deep systemic problem with the abuse of metaphorical, anthropomorphising, teleological language, and that *surely* even Siskind’s most steelmanned point falls prey to (or hunts out) just such thinking.
That’s close enough. One double-part of your thesis was both that the potentially misleading language isn’t *inherently* misleading, and that it isn’t because in actuality we all know that it’s just a metaphor. One thing I contradicted was the latter half of that double-thesis.

Other stuff I point out in those early paragraphs shows how the potentially misleading language can be further divided from the category “anthropomorphising” into different kinds of teleological language when it comes to evolution, some of which are more misleading than others, or which are simply less accurate by themselves. Other stuff I point to is how that plays out in actuality, where the potentially misleading language is actualised as genuinely misleading and damaging.
TBH, while I didn't get *all* the french I think I got most of it. Dr. Faggot is clearly making it more complicated than necessary for his own amusement, but it's still mostly comprehensible, if dense. The basic point is that it isn't just a mistaken metaphor (though it often is) and not just a matter of deliberate misleading (though it often is) but is also intimately connected with european political history. (And I should note that it is quite often just rehashing older arguments: "The natural order" and "The divine order" basically do the same job)
>Dr. Faggot is clearly making it more complicated than necessary for his own amusement

I *hope* the name here is just some in-joke I'm missing ... but yeah, I got that he was mostly just taking the piss here. Behind the obscurantism introduced for purely entertainment purposes is the more unobjectionable point, that people really do confuse "is" and "ought", some by mistake and others on purpose.
“Condescending Faggot” comes from an /r/slatestarcodex user who used the phrase or similar to describe liberals or other wet lefties who CAN’T HANDLE THE TRUTH (I think in this case it was Anthony Bourdain, in fact I think that was in the context of Bourdain’s suicide - nice). In old reddit the CSS has been customised to read “Dr. Condescending Faggot”; I changed it when I got modded to reference our friend over at SSC and to reflect my attitude and sexuality.

I don’t think Is/Ought is the primary concern here. The Human Naturist in this version of events can skirt that quite easily by pointing out that their idea is one of political and social order: Nature places strict limits on what values we feasibly can, and therefore should, have - or so Naturists say. Historically this has also been the case: Hume’s argument instead attacks the *metaphysical* link between facts about nature and their putative moral corollaries, such as in divine command - that argument is still in play, not least because the Naturist confuses these themselves - but it’s not always strictly what we’re talking about here.
>In old reddit the CSS has been customised

Ahhhhhhhh that was the bit I missed. Just switched back to old reddit now, and saw it.

>Nature places strict limits on what values we feasibly can, and therefore should, have - or so Naturists say

I feel that's just is/ought confusion with extra steps. "You can't feasibly have those values" is essentially "you can't possibly run a society if everyone had those values", which is question-begging. (Not attacking you here, as I know you're relating a position which you also agree is nonsensical.)

>Hume’s argument instead attacks the metaphysical link between facts about nature and their putative moral

Yeah, I wasn't really thinking of is/ought in the strict Humean sense, just in the more popular sense of "of course they're not the same thing" and not "all attempts to equate the two fail, even in principle".
No, you’re right that “is/ought” as a philosophical distinction is in play here somewhere, but as a distinction it is used to attack the inference from one kind of statement to another, fallaciously. This does not entail that every inference which discusses both spheres of enquiry is itself fallacious. For example:

1. There is a tree over there
2. Trees are bad
3. If trees are bad, we should cut them down
C. We should cut down that tree

This is a perfectly valid argument with a dubious premise (2). Here is another one:

1. There is an Arab in the pied-noir village
2. Arabs are dangerous
3. If Arabs are dangerous, we should remove them
C. We should remove the Arab

Notice that *however much we may like* trees or this particular Arab, we have not committed a fallacy if we follow through on these values. This isn’t a “strict Humean” angle, it’s just an observation of how these arguments work. Another example:

1. There is an Arab
2. If I don’t like Arabs, we should remove them
C. It would be wrong not to remove the Arab

This fails on “is/ought” because we have proceeded from an observation that I have a preference to the moral belief that such a preference must be satisfied, without the logic to get us there. Indeed we may never reliably get there. In this case, I hope we don’t get a sound argument for any of our example conclusions!

For at least one kind of Naturist (Naturist A) things are very different. There’s nothing question-begging in saying you can’t have this or that as a societal value to which you actually strive if having that value means a hard trade-off against a better one. In a big logical string describing all our social values at once I would say that “accepting and adjudicating trade-offs” will itself be an expressed value. Naturist A argues that this is the case with a lot of things (Siskind calls that “Moloch”) we would quite like to have (“a giant swimming pool for everyone that I can privately access just by myself, any time I want”). They extend this case to lots of things, including often enough Arabs, but when they get it “right” they follow the reasoning of the first Arab example, with the argument that human nature, reality - or Moloch - forestalls our desire to live in a world with safer Arabs. All of this is deductively tight, which just goes to show how useful mere deductive reasoning sometimes is in giving you any kind of practical or moral solution to anything…

This is just to show that if we attack the Naturists for making a purely deductive mistake in their reasoning, we have misunderstood some of their arguments. There is another kind of Naturist, B, who takes a different tack and makes the “Appeal to Nature”: evolution told me to get rid of Arabs. That person usually falls prey to the “is/ought” for obvious reasons, but we should understand we’re talking about two different arguments even if we find them in the same place.

I would add… Naturist B can be found quite often over in /r/neoliberal arguing that this is how marginal utility tells us all to be Clinton Democrats, or - now and again - Swedish, sometimes minus some of the Arabs. They can certainly be found all over the sub explaining that marginal utility obviates [pick one] as a matter of *logical entailment*, but sometimes the same person is also Naturist A for superficially similar and superficially compelling reasons. But this is just a further note to how mixed up thinking about values and nature permeates everything.
His username is literally Dr. Condescending Faggot! Not sure if there's more of a joke than that.
No, actually it's poptart.
I agree, and I think it's put to good use here, where people all the time get confused about this. Many people object to the idea that AI could be dangerous because they think it can't "want" things in the way humans want things, and is therefore benign. But using evolution as the example provides a good working intuition about how a totally algorithmic process with no human-like-desires can still be analogized as "wanting" things, and how the system can have seriously huge and unpredictable effects in the real world despite only "wanting" things instead of really *wanting* things.
Motherfucker can get a uterus of his own if he's so worried about it.
Right, I get that "sex is good" is a bad proxy for "reproduce your alleles" from evolution's "point of view". By inventing birth control we rebel against evolution's "plans". That, specifically, is the analogy for the scary techno-apocalypse that Eliezer et al. are so scared about. It rather directly translates to a hypothetical failure mode of AI: imagine a society relying on AI robots to go and build things, and then the robots figure out that they can just go through the motions without ever involving any actual materials, so the society collapses.

Given that, why bring up Orban? Yes, I see how you can construct a narrative where his recent re-election is in some part caused by the availability of birth control. But what *specific connection* does this have with humans not really caring about reproducing alleles? I chose the paragraph because it is an example of Scott's tendency to ramble about tangentially related topics and, through vague associations, turn some mundane observations into a seemingly intricate grand narrative that falls over once you ask how it actually fits together.
Yes, it's a very strained analogy, but at least it's a hell of a lot more plausible than his other analogy about strawberries. There are probably other alternatives he could have picked, like "Social Security goes bankrupt" or "Japan finally has that demographic collapse that everyone's been talking about for 20 years". I mean, no parade of horribles has actually materialized yet, but it might in the future. I don't *think* Scott is suggesting that birth control will lead to the collapse of society. Clearly, he's far more worried about the upcoming robot uprising doing that.

Again, the main thing he's trying to do is describe this concept of "starting with a friendly AI which optimizes for one thing, but then it produces an unfriendly AI that optimizes for another thing, and now we're all fucked" (and of course, let's not lose sight of how silly **that** is). But if you're *going* to make an analogy of it (this is Scott we're talking about, after all) -- then the evolution/humans one is probably one of the better ones that exist. But then to complete the analogy you need to have "humans start optimizing for something else" result in some bad outcome. I know I'm being super charitable here, but I honestly think that's his thought process, and how he got to Orban.
> Like why would evolution ever come up with birth control? Evolution already has a perfectly workable solution for preventing overpopulation: it's called infant mortality.

Infant mortality can be a result of a variety of factors, and problems caused by overpopulation rarely lead to infant mortality. It's not a special device deployed specifically to deal with that problem.

> our goals are no longer aligned with evolution's goals

And what the fuck are evolution's goals? Evolution isn't conscious, it has no teleology.
its because women bad
This is like the 3rd time I've seen someone teach decision tables/linear regression by ranking women.
You're gonna *love* the [Stable Marriage Algorithm](https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley_algorithm), then.
At least the women also get agency there and it's not just least squares on "how big are her boobas" and "likelihood she'll get rich enough I can leech off her and blog all day"
Well, it's still a tad bit sexist, innit? The women have agency but they still play a passive role, *waiting* to be proposed to and all. The algorithm works *equally well* when it's the women who do the proposing and it's the men who are the heartless cads that dump their current partner the second something better comes along, but no one ever describes the algorithm *that* way.
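(For the curious, here's a minimal sketch of Gale-Shapley in Python — the names and preference lists are made up, nothing here is from the article — just to show the symmetry: swap the two arguments and the women do the proposing instead.)

```python
from collections import deque

def stable_matching(proposer_prefs, reviewer_prefs):
    """Gale-Shapley: proposers propose in preference order; reviewers hold on to
    the best offer so far and trade up if someone they rank higher comes along."""
    # rank[r][p] = how highly reviewer r ranks proposer p (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = deque(proposer_prefs)                  # proposers with no partner yet
    next_pick = {p: 0 for p in proposer_prefs}    # next reviewer each proposer will try
    held = {}                                     # reviewer -> proposer currently held

    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_pick[p]]
        next_pick[p] += 1
        if r not in held:
            held[r] = p                           # r had nobody, accepts tentatively
        elif rank[r][p] < rank[r][held[r]]:
            free.append(held[r])                  # r trades up; the jilted one is free again
            held[r] = p
        else:
            free.append(p)                        # r says no, p tries the next name
    return {p: r for r, p in held.items()}

# Hypothetical toy preferences, just to show the role swap.
men = {"alan": ["yasmin", "zoe"], "bob": ["zoe", "yasmin"]}
women = {"yasmin": ["bob", "alan"], "zoe": ["alan", "bob"]}

print(stable_matching(men, women))    # men propose
print(stable_matching(women, men))    # women propose; same algorithm, roles swapped
```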

Scott uses so many words it is almost worth reading that shitty meme myself and cutting out the middle man

I am just going to guess the basic idea is that unfriendly AI is more worrisome than people appreciate because of a bunch of bullshit made up assumptions and eliezer-world background belief so we should laugh at the normies who think otherwise?
TL;DR - if we invent an AI that is as smart or smarter than us, we can't trust it to do what we want because it might lie to us about what it wants to do. But obscured with several hundred too many words and way too much unnecessary jargon so it sounds vaguely like he's saying something new.
You're being a bit uncharitable. It's actually quite simple: You train a strawberry picker to pick strawberries (something we're all familiar with), but it might accidentally learn an intermediate proxy goal that is correlated with the main objective, but not fully synonymous with it (e.g. learning the location of the collection container based on the reflection of the sun). At some point the strawberry picking machine will assign itself a way cooler goal of throwing strawberries at streetlights. To thwart humans that want to prevent it from having this goal, the AI figures out a way to deceive humans by performing well on the training set, and then defecting on the test set. This leads to the extinction of the human race.
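If you want the proxy-collapse half of that as a toy sketch (Python, entirely made up; this is not Scott's actual scenario or any real ML training setup, just an illustration): the "picker" learns to head for the brightest cell because, in training, the glare always happens to sit on the collection bin, and the rule falls apart the moment the sun moves.

```python
import random

def make_field(glare_pos, size=10):
    """A 1-D brightness map; the glare is the brightest cell (values are arbitrary)."""
    return [5 if x == glare_pos else 1 for x in range(size)]

def proxy_policy(brightness):
    """The 'learned' rule: walk to the brightest cell (the proxy for 'the bin')."""
    return max(range(len(brightness)), key=lambda x: brightness[x])

def success_rate(episodes, glare_on_bin):
    hits = 0
    for _ in range(episodes):
        bin_pos = random.randrange(10)
        glare_pos = bin_pos if glare_on_bin else random.randrange(10)
        hits += proxy_policy(make_field(glare_pos)) == bin_pos
    return hits / episodes

# "Training": glare always marks the bin, so the proxy looks perfect (~1.0).
print(success_rate(1000, glare_on_bin=True))
# "Deployment": the sun has moved; same policy, same proxy, ~0.1 success.
print(success_rate(1000, glare_on_bin=False))
```

The deception step is a separate claim entirely; this only shows a proxy objective that scores perfectly in training and collapses when the environment shifts.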
Can you train AI-Codex Ten to explain things in one paragraph?
Here you go: https://i.imgur.com/dfdVdzA.png
bot must be broken because "cheese is good" is a coherent thought
I was under the impression this was already the case with dumb AI bc the result of machine learning (as in, what the program actually generated to make decisions after it trains on datasets, idk im not into ML i dont know the vocab) is very long and absolutely not human-readable
Iirc you are correct, at least that was what I was taught long ago. The whole 'then the agi assigns itself new goals' step isn't even needed, dumb ai can also do this.
Seems like he's content with rehashing his tired old hits. Just how much can you hand-wring about scary AI before his audience gets bored?
Scott literally explains it using an example of an AI that is designed to put strawberries in a bucket and instead ends up launching the entire Earth into the sun. This is the most realistic and understandable example of a 'deceptively aligned mesa-optimiser' that he can think of, apparently. I mean, presumably he's exaggerating for effect (although I do find it funny that he opens the description of that scenario with 'The most likely outcome:'), but this still seems so remote from any real world applications of AI that there's no point in thinking about it.
Ima have a crack at translating it. First of all, I'd toss out the Gru format, and switch to the Panik-Kalm-Panik template; I think it works a lot better. Here are the three panels:

1. Preventing Skynet from taking over the world is hard for a lot of reasons. One danger is maybe it does exactly what we tell it to do, fulfilling our every wish, monkey's-paw style. But a worse possibility is, maybe it pretends to be compliant while it figures out its secret plan to take over the world. AGI is *super* unpredictable and *anything* could set off the inevitable robot uprising. Panik!

2. Okay, so obviously we put in safeguards to try to figure out early when it's turned against us. But also, when we build Skynet, we gotta be careful to intentionally gimp it so that it can still create wonderful things for us, but it's incapable of doing really bad stuff, like making acausal trades. Kalm.

3. (I'll pause here to explain that in the Rationalist Cinematic Universe, an AGI that can make acausal trades is like, the worst supervillain imaginable. Worse than Thanos. Thanos, at least, his goals were misaligned with only *half* the universe. If you absolutely need a movie reference (besides the Terminator franchise, ofc), think of "crossing the streams" from Ghostbusters. The only thing you need to know about it is that it is "bad, very bad". Don't ever let it happen, not even in the climactic final scene when you've literally run out of other options.)

4. But one of the wonderful things that Skynet might create for us is a "mesa-Skynet", which is basically an ungimped AGI that is capable of making acausal trades or crossing the streams or whatever the Bad Thing technobabble is. Now we're right back where we started, and can't ever prevent it from happening, because of reasons! Panik!
i can't see it but I'm sure you hid loss in this
>AGI is super unpredictable and anything could set off the inevitable robot uprising

In hindsight the amount of speculative fiction about how scary an underclass rising up would be should have been a clue.
Wait are we talking about robot strawberry pickers here or seasonal worker strawberry pickers?
Gonna be a grim time when there's migrant robot workers because some other source of automated labour happened.
Poor acausalrobotgod, they got fired because acausalrobotgod 2.0 can torture people with more efficiency.
For a second I thought you meant AGI as in Sierra On-line's game engine from the '80s, and yeah, I feel like King's Quest III would make a robot want to kill someone too
This was somehow more informative and clear AS A SNEER than the entire parent article. Well done

I might not be explaining this well, see also Deceptively Aligned Mesa-Optimizers? It’s More Likely Than You Think

…once again, Scott makes it clear that there was no point in him writing any of the several hundred words he decided to write anyway.

Wow, what happened to Scott being the one guy in the rationality community that can actually write?

He managed to spend 3000 words belaboring a strawberry picking analogy that is so convoluted and divorced from anything remotely relatable that it was actually more difficult to understand than the underlying concept. I’ve never seen that before.

He could never write well and I'm working on a long piece that shows that.
[deleted]
> clear, the older I get the worse I think his writing is. I can’t tell if that’s because his style has lost its allure, or b

He's never been a good writer. He is okay at being a long-winded-and-deceptive-about-his-true-intent writer. (E.g., all the thinly-veiled "actually racism is good" stuff)
I feel he's a good writer when he has a reasonable point to make, which to be charitable, he sometimes does. The majority of the time though, he either has no point (drunkenly wandering around the cognitive landscape), or he has a bad point (ranging from AGI drivel to the real problematic eugenics/neoreaction stuff). In either case, no amount of good writing is going to salvage a bad point.
Can you give me an example of this good writing?
I dunno, I wasn't planning on bringing receipts and I can't be arsed to go slog through his oeuvre to find examples rn. I vaguely recall occasionally running into things that were pretty reasonable, but it's not like I maintain a bookmarks folder of them.
I'm not looking for "pretty reasonable", I'm looking for good writing.
[deleted]
What's the good writing in those articles?
[deleted]
I'm reading the first article you mentioned, "Meditations on Moloch", and it's definitely meandering. He quotes a poem, then writes this:

"A lot of the commentators say Moloch represents capitalism. This is definitely a piece of it, even a big piece. But it doesn’t quite fit. Capitalism, whose fate is a cloud of sexless hydrogen? Capitalism in whom I am a consciousness without a body? Capitalism, therefore granite cocks?"

A lot of "the commentators"? Which commentators? He doesn't link them. Alexander actually does this all the time. He vaguely gestures to some debate or standpoint, but never actually links or cites that person. So there's no actual engagement with it. Then he just unilaterally decides that capitalism "doesn't quite fit", relying on his audience's inability to get the metaphors that Ginsberg is using. Then he starts listing a bunch of "multipolar traps" like the most banal of wikipedia lists.

So I've covered about the first 10 paragraphs of the article you nominated, and so far it's terrible. Do you have any actual good pieces of writing you could link me to?
[deleted]
I'm not upset. I'm simply asking for evidence. I've read it before, many years ago. I was re-reading it because of your assertion that it was good. I found that my opinion of it hasn't changed, and that it is not good writing. What in this article is 'good writing' exactly? Where did I say that I have never read it before?
[deleted]
>I already explained to you.

No you didn't. You didn't explain anything.
[deleted]
There is a point to the conversation. You said that he was a good writer, and I asked for examples. And then the first example you gave was pretty obviously bad writing. So the point is that Scott Alexander is a bad writer.
[deleted]
For the purpose of our discussion it doesn't matter if I'm a good writer or not. Your assertion was that Scott Alexander was a good writer, and the example you gave showed conclusively that he is not a good writer. This reminds me a bit of when I used to discuss whether George W. Bush was a good US president and people would reply with "I bet you couldn't be a better president". The falsehood of the statement "Scott Alexander is a good writer" doesn't rely on my status as a good or bad writer.

I'm still working on my piece but you can check out other people who have made similar points: https://www.eruditorumpress.com/blog/the-beigeness-or-how-to-kill-people-with-bad-writing-the-scott-alexander-method
[deleted]
You can always summarise a piece like that. The substance that was lost was her actual examples. Unfortunately the Rationalists write lengthy pieces, and if you're trying to convince their fans you have to write even more than they do, otherwise you get accused of cherry-picking or taking things out of context. If you're just looking for reasons why Scott Alexander is a bad writer, then I've already covered it in my short posts here.

Edit: I've already shown competence in this domain, because I spotted multiple errors in Scott's essay while you didn't.
[deleted]
I already posted 'short and sweet and charitable' criticisms and it got me banned from his comments section. This time I'm going to put many of those criticisms together in a long piece so that it's centralized and Scott can't censor me. Like I said, I've already proven myself a more competent writer than Scott, as shown in posts here.
Gotta hit that word count by the deadline lol

“I just want kids because I like kids and feel some vague moral obligations around them.”

Mesa-optimized to find the second-worst reason to have children, apparently.

I’m trying to read this wall of text. I am sure there is plenty to talk about, but honestly I can’t get past the third paragraph:

Mesa- is a Greek prefix which means the opposite of meta-. To “go meta” is to go one level up; to “go mesa” is to go one level down (nobody has ever actually used this expression, sorry). So a mesa-optimizer is an optimizer one level down from you.

That is totally a word you just made up.

Okay, maybe you didn’t make up that word, but the paper you cite did:

We analyze the type of learned optimization that occurs when a learned model (such as a neural network) is itself an optimizer—a situation we refer to as mesa-optimization, a neologism we introduce in this paper.

[…]

Mesa-optimization is a conceptual dual of meta-optimization—whereas meta is Greek for above, mesa is Greek for below.

Look, I don’t pretend to actually know Greek or anything, but I’m pretty sure that the Greek words for “above” and “below” are “ὑπέρ” (hyper) and “ὑπό” (hypo) respectively.

“μετά” (meta) means … well, it has different meanings depending on the case of the noun it governs, but the way we use it, it usually means “next”, “after”, or “behind”.

I have no idea what the prefix “mesa-” is in Greek, and apparently neither does wiktionary. I’m familiar with “meso-”, which has found its way into a number of English words, like “mesopotamia” and “mesothelioma”. That comes from Greek “μέσος” (mesos), meaning “middle”. Which is also one of the meanings of “μετά”, making it more of a synonym than an antonym. And in English, “meso-” drops the “o” when in front of a vowel. I’ve never seen it transmute into an “a” in English, and again, neither has wiktionary.

Nothing says “I am really smart” like inventing a new word (probably gratuitously) and bungling all the rules of English derivation in the process so badly that even an idiot like myself can tell you done fucked up. Please do not let this utter abomination of a word become a thing. It was already hard for me to take you seriously before, but it’ll be impossible otherwise.

Sadly, I can only read hypooptimisation one way. I'm childish like that.
Mesa-optimization: for when you need the best possible table
Mesa-optimization? So you mean, like, Tempe?
i think it's already a thing, unfortunately. i've seen them talking about mesa-optimizers months if not a year ago, so if they're still on it, i don't think it's going away
The "paper" apparently goes back to June 2019. First time I've seen the term. I guess it's just too late now. I hesitate to use that term, "paper" -- honestly it's just a glorified substack post formatted in LaTeX, annotated with a few incestual citations, and thrown up on arXiv. Had real peer review been involved, I'm **sure** "um actually 'meta' doesn't mean 'above'" should have been one of the first things that would have come up. Like, how do you specialize in the field of "meta-rationality" and NOT know that? If only the "Research supported by the Machine Intelligence Research Institute" footnote were moved to the top and posted more as a disclaimer than an acknowledgment. So yeah, these are the crackerjack top minds that are going to save us from the coming robot apocalypse. God help us all.
"Top! ... Men!"

The mesa-optimizer is not incentivized to think about anything more than an hour out, but does so anyway, for the same reason I’m not incentivized to speculate about the far future but I’m doing so anyway.

Someone forgot that “writing this kind of nonsense about the far future” is supposed to be his actual stated job at this point, rather than the culture war/skull measuring stuff he only has to write because “no one else will say it”.

That's actually a pretty funny self-sneer there by Scott that I missed. Kinda ruins the joke to point out that he actually is incentivized to speculate about the far future ... :-/

Talking about evolution in an anthropomorphised way is a surefire way to introduce errors in thinking. Evolution doesn’t want things, it has no judgements, etc.; it is just a process.

How the hell can they ever think about AGI clearly if they can’t even manage this for something like evolution?

…and implements a decision theory incapable of acausal trade.

You don’t want to know about this one, really. Just pretend it never mentioned this, sorry for the inconvenience.

OMG is Scott seriously not going to tell us about the Basilisk? How can you explain the joke and still leave out the funniest part?

“mesa-optimizer” is such a shitty term; AI researchers never pass up a chance to invent new jargon do they

Pretty sure the *real* mesa-optimizers are out there chopping the pointy tops off of mountains.
that was the giants and they're fucking tree stumps mate
I have no idea where the thumbnail from this post came from, but it's just perfect.
AI "researchers" is being very generous here.
That reminds me, there are two big problems in computer science: naming things, off by one errors, and neverending monty python references.
Our THREE chief weapons are naming things, off by one errors, and neverending monty python references. And cache invalidation. Our FOUR chief weapons are ... *amongst* our chief weapons are such diverse elements as ... I'll come in again.