r/SneerClub archives
Marc Andreessen: Why AI Will Save the World (https://archive.is/OIou8)

ChatGPT, take my last blog post and replace references to Web3 with references to AI

I even think AI is going to improve warfare, when it has to happen, by reducing wartime death rates dramatically. Every war is characterized by terrible decisions made under intense pressure and with sharply limited information by very limited human leaders. Now, military commanders and political leaders will have AI advisors that will help them make much better strategic and tactical decisions, minimizing risk, error, and unnecessary bloodshed.

I know, I know, not all software engineers…

But what the fuck is wrong with these nerds brains?

Does he think "bloodshed" is just incidental to warfare? Marc Andreessen is probs in the silicon valley "nootropics" department (read: rich druggie), so that explains the blogging.
He's a billionaire
That's probably why.
I’m barely over a millionaire in assets and have long been into nootropics as a Neuroscience grad student. Been trying to optimize my brain for over a decade now. Yes, 98% of nootropics are junk. But yes, one can aspire to be more productive in their work or hobbies with the use of safe noots. You also don’t have to be a millionaire to order even a year’s supply online of various drugs to self-administer and monitor.

Edit: After some digging since I haven’t bought in a while and have a supply to last me through nuclear armageddon, a sample pack of Modafinil and various other -racetams, coupled with well-sourced choline, will run you AT MOST $200 for a year’s supply. Things get slightly more expensive if you do decide to ramp up on a particular drug for an extended period of time. What’s that, a total of $200-$400/yr to fix any issues your therapist and psychiatrist have trouble solving? Worth it, imo. I’m also weird and autistic as fuck, so take what I say with a grain of salt I guess. Marc’s a dork though.
> What’s that, a total of $200-$400/yr to fix any issues your therapist and psychiatrist have trouble solving?

I'm sorry, I have a similar background with nootropics, but nootropics don't fix any problems. Therapy and nootropics work at entirely different levels of analysis. Yes, they may make you memorize faster or be more productive, but they will not solve your existential anxieties, your oedipal childhood issues or your lack of confidence due to trauma. Nootropics might also just send you into psychosis, depression or fuck up your neurochemistry for a few months.
I’ve been in academia for a long time now, and I’m well aware of the risks. Browsing r/nootropics makes me sad for others who want a quick fix, when they probably could’ve worked it out with a therapist. My particular academic interests lie in the pharmacokinetics of drugs that help overcome drug addiction (cocaine, primarily), but along the way I’ve come to learn to take as many precautions as possible when deciding to ingest a foreign substance. At the end of the day, I just wanted to point out that you can be an undergrad student, riddled with debt, and still be able to acquire top notch noots. It’s not a billionaire thing at all, whatsoever. That said, I’m not promoting haphazard use, of course.
Concerning the sinister effects of chronic cocaine: I assume you have looked into kappa opioid dynamics? It seems to not only be implicated in trauma, but upregulated by chronic cocaine use and very hard to downregulate. Classic psychedelics and salvia in particular seem to be one of the few things that positively affect it. Hope you are good!
Oh I was just giving a bit of context as to who he is, I wasn't trying to comment on nootropics. I'm not opposed to supplements, so long as they're taken in consultation with your doctor and the actual scientific literature. If you find something works for you, placebo or otherwise, have fun with it. I'm happy for real research to be done in those areas.
Can't wait to get blown off the face of the Earth because some AI model hiccuped and decided I'm a threat
Lol, they also said that sort of shit about strategic bombing, how it would end wars quicker and more efficiently.
That sounds like something that might be possible if *one* nation used AI for warfare, but none of the rest did.
Sounds like Robert McNamara who won the metrics war but nothing else. [McNamara fallacy](https://en.wikipedia.org/wiki/McNamara_fallacy) [Streetlight effect](https://en.wikipedia.org/wiki/Streetlight_effect)

Laissez-faire Capitalism will Save Humanity ft. AI Safety.

But only if we prevent regulatory capture, the solution to which this bullet point is too narrow to contain
Upvote for Fermat reference.
I do wonder, though, how AI can make things much better than they already are. As is always advertised, automation [introduced over the past two-ish centuries] has made menial work obsolete. Everything is basically free, all manual labour is done by robots, most disease is gone, everyone is happy, war is clean. If we aren't spending our time on lives of pure leisure, we're all free to concentrate on The Big Questions & improving the lot of humanity.
"The invisible hand" is now AI. Sleep well.

“AI can’t lead to crippling inequality because its owners will simply seek to maximise their profit.” Oh, ok, that’s allayed all of my fears about it.

We definitely know from history that the bourgeoisie have chosen to do this by making the means of production meaningfully accessible to the proletariat, so I think that makes the score Marc: 1; Marx: nil.

Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.


In short, anything that people do with their natural intelligence today can be done much better with AI, and we will be able to take on new challenges that have been impossible to tackle without AI, from curing all diseases to achieving interstellar travel.


… … …

And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.

This cult is why there are a set of AI risk doomers who sound so extreme – it’s not that they actually have secret knowledge that make their extremism logical, it’s that they’ve whipped themselves into a frenzy and really are…extremely extreme.


> extremely extreme

Yud learned how to skateboard??
His AI tutor argument really just sounds like an argument for how great AI girlfriends are going to be lol. "Why settle for a real woman that won't polish you off every 30 seconds when you can have a robot woman that will?"
The AI tutor argument is from [The diamond age](https://en.wikipedia.org/wiki/The_Diamond_Age), also known as 'The Diamond Age: Or, A Young Lady's Illustrated Primer'. It will prob be something like the metaverse, only used by mad tech fetishists (or horny tech fetishists, looking at you second life, go do you!), or by poor people or people who get forced into using it by authority figures. Small reminder that all Neal Stephenson ideas when implemented in the real world have sucked: the metaverse, cryptocurrencies, and now AI tutors who really love and care for you. (I do like his books even if he can't write endings, they tend to be a bit long however.) E: an AI tutor is also involved in the creation of [Space Hitler: Or the case for genocide](https://en.wikipedia.org/wiki/Ender%27s_Game)
Well, in the case of *Snow Crash*, the metaverse kind of sucking was the whole point. The whole idea is that the real world is so dystopian that people would rather live in an artificial fantasy, and that's a bad thing. Anybody suggesting the metaverse as a solution to problems in the real world is unintentionally saying that our problems have gotten so bad that we may as well just give up on fixing them altogether. Problem was, a generation of techies read *Snow Crash* and saw the metaverse as a blueprint for a high-tech utopia where we were no longer bound by the constraints of "meatspace".
In the plot of Diamond Age, the "AI tutor" required a human voice actor, on a real-time/live basis, to actually provide the "love and care" part. Sort of like paying a person off fiverr to read ChatGPT-generated educational stories to you, but with added emotional labour.
And here I was, thinking the AI doomers were just giving backhanded "criticism" of overhyped tech that's still designed to make it look absolutely badass and world-changing, so you should invest in it now to make sure it's "aligned" properly.
The alignment question has always been, "but can it be slave labor?" in several more words.
Freedom from choices? The AI is your guardian angel.

Has anyone done a The End of History but with AI yet?

Sounds like a guy trying to make sure his investments in AI payoff.

I got about 20% through the post before giving up and skimming the rest. It’s a lot of uncritical, doe-eyed, baseless, and flat-out ignorant speculation.

Possibly the only worthwhile part of the article is right at the start where he gives an ok statement of what AI is and what it will be for the foreseeable future. Everything else in the article is essentially everything rationalists evangelise about the potential good of AI/AGI.

His entire take really just boils down to: “we should just arrest all the bad people and stop all the crimes and everything will be ok, i promise :) Chinago delenda est.”

Does this honestly mean that computer engineers are the warriors of the future?

Have the rationalists heard about https://en.m.wikipedia.org/wiki/Kantian_ethics

Finally some representation for the pole of insanity opposite AI doomers’.

I’m the odd one out it seems. I agree with the general gist of most of Andreessen’s points.

The stuff Andreessen writes about Marx is nonsense, e.g.:

> As it happens, this was a central claim of Marxism, that the owners of the means of production – the bourgeoisie – would inevitably steal all societal wealth from the people who do the actual work – the proletariat.

It is well known that Marx didn't think this was "stealing" and actively argued against such a conception:

> The upshot is at best that the bourgeois legal conceptions of “theft” apply equally well to the “honest” gains of the bourgeois himself. On the other hand, since “theft” as a forcible violation of property presupposes the existence of property, Proudhon entangled himself in all sorts of fantasies, obscure even to himself, about true bourgeois property.

https://www.marxists.org/archive/marx/works/1865/letters/65_01_24.htm

More Andreessen:

> The flaw in this theory is that, as the owner of a piece of technology, it’s not in your own interest to keep it to yourself – in fact the opposite, it’s in your own interest to sell it to as many customers as possible.

Marx never argues anything different. Marx argues that surplus value is generated in production, e.g.:

> The money-owner buys everything necessary for this purpose, such as raw material, in the market, and pays for it at its full value. The consumption of labour-power is at one and the same time the production of commodities and of surplus-value. The consumption of labour-power is completed, as in the case of every other commodity, outside the limits of the market or of the sphere of circulation. Accompanied by Mr. Moneybags and by the possessor of labour-power, we therefore take leave for a time of this noisy sphere, where everything takes place on the surface and in view of all men, and follow them both into the hidden abode of production, on whose threshold there stares us in the face “No admittance except on business.” Here we shall see, not only how capital produces, but how capital is produced. We shall at last force the secret of profit making.

> This sphere that we are deserting, within whose boundaries the sale and purchase of labour-power goes on, is in fact a very Eden of the innate rights of man. There alone rule Freedom, Equality, Property and Bentham. Freedom, because both buyer and seller of a commodity, say of labour-power, are constrained only by their own free will. They contract as free agents, and the agreement they come to, is but the form in which they give legal expression to their common will. Equality, because each enters into relation with the other, as with a simple owner of commodities, and they exchange equivalent for equivalent. Property, because each disposes only of what is his own. And Bentham, because each looks only to himself. The only force that brings them together and puts them in relation with each other, is the selfishness, the gain and the private interests of each. Each looks to himself only, and no one troubles himself about the rest, and just because they do so, do they all, in accordance with the pre-established harmony of things, or under the auspices of an all-shrewd providence, work together to their mutual advantage, for the common weal and in the interest of all.

> On leaving this sphere of simple circulation or of exchange of commodities, which furnishes the “Free-trader Vulgaris” with his views and ideas, and with the standard by which he judges a society based on capital and wages, we think we can perceive a change in the physiognomy of our dramatis personae. He, who before was the money-owner, now strides in front as capitalist; the possessor of labour-power follows as his labourer. The one with an air of importance, smirking, intent on business; the other, timid and holding back, like one who is bringing his own hide to market and has nothing to expect but — a hiding.

https://www.marxists.org/archive/marx/works/1867-c1/ch06.htm
Yeah his little thesis about inequality is straight up brain broken
Is Marc not just trying to simplify Marx for folks who haven't read Marx? And as is always the case when simplifying, nuance is lost. I get that at a high school and higher level of discourse the nuance of these distinctions IS useful to explore. And at a college level SHOULD be explored. But if I was trying to ELI5 to a child, would I not convert this sentence

From: "Legal protection of the bourgeois to control the means of production allows them to extract the surplus value created by the proletariat's labor."

To: "Owners steal from workers"?

Again, I absolutely agree nuance is lost when doing so. But not everyone is interested in or can grasp the nuance of Marx. If you disagree, how would you simplify the concept we're talking about to a child?
Extraction of surplus value is not "stealing".

> But not everyone is interested

If Andreessen is not *interested* in Marx's actual positions then he shouldn't be attempting to write about Marx. Where does "ELI5 to a child" come in? Are Andreessen's readers 5 year olds?
Point taken.
Yeah, *most* of his points are just about how chatbots won't destroy the world. After the bit about how they'll save it.
I read it and thought of that “the worst person you know just made a great point” Onion article. Andreesen is in some ways an expert opportunist, and I think he saw a nice opportunity here: be ahead of the curve and predict that AI doomerism would soon be discredited, since AI murder bots are not going to go rogue and act out the Terminator plot, which would pretty much need to happen to validate the hype at this point. He can score easy points here, so he went for it. But he’s also right.
He’s right about the AI doomers being nutters, but his pollyanna take about AI doing more good than harm makes a lot of assumptions. The worst is the classic Econ one of “material wealth will make the world better”, which is obviously true if you live in abject poverty but diminishes in relevance as you (and society) get richer. At some point, material wealth stops generating returns (everyone is fed, clothed, has medical care and HBO) and things like meaning, fulfillment, mental health, social relationships — just matter a lot more.

Yes, on average more technology is good, but that doesn’t mean any *particular* technology is net positive. “Cheap meth production” is on net bad for society. Since we already have a surplus of text and video content, I fail to see how “cheap text and video production” — which is what this tech is at heart — is going to help us. The AI waifu scenario seems more likely to incentivize further social alienation than to “help” those who are already isolated.

Read between the lines of Marc’s solutionism here and the gist is: guess what, humans don’t need to care about each other as people anymore, we’ll just have computers fake it so that we can all go on being asshole venture capitalists to each other in real life. If you don’t like it, go sob to your chatbot about it!

I’m usually happy to be first aboard the Marc Andreessen hate train, but this might be the most sober and reasonable take on the whole robot apocalypse frenzy that I’ve seen so far.

My view is simple: I disbelieve all the doomer scenarios. I don’t buy the notion that throwing more compute at transformers, or even 1 or 2 more model breakthroughs, will result in the kind of general super intelligence we are worried about.

We are building based on models of the human brain. The assumption that more CPU = smarter seems to be… a giant assumption. We know from human models that smarter people don’t actually seem to be made out of something different than everyone else! It’s not the clock speed but the structure.

Right now all the calls for regulation will just limit accessibility of these powerful tools to … already rich and powerful people.

Imagine if the only people that got power tools were already wealthy. Everyone else had to build their houses out of sticks. That’s the scenario that Sam is advocating for now.

Gotta stop you right there; we are not building anything based on models of the human brain. Neural network is a misnomer.
The other day on Twitter Emily Bender described them as being based on a 1950s misunderstanding of how the brain works, which I found interesting. I don’t know enough about them to say how valid that is, however.
Yeah that is a very good way to sum it up. A couple of guys (McCulloch and Pitts, back in 1943) wrote a paper about the barest possible sketch of a neuron (not their fault, it was early days for neuroscience) as an all-or-nothing electrical system, equivalent to a transistor. If this were the case, then the brain could be understood through regular ol' math.
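To see how bare that sketch really is: the McCulloch-Pitts "neuron" is just a weighted threshold gate, nothing more. A minimal illustration (the function name and weights here are made up for the example, not from any library):

```python
# Minimal sketch of a McCulloch-Pitts neuron: an all-or-nothing
# threshold unit, closer to a logic gate than to a biological neuron.
def mp_neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted sum of binary inputs
    meets the threshold; otherwise stays silent (returns 0)."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With unit weights and threshold 2, two inputs implement logical AND:
assert mp_neuron([1, 1], [1, 1], 2) == 1
assert mp_neuron([1, 0], [1, 1], 2) == 0

# Lower the threshold to 1 and the very same unit becomes OR:
assert mp_neuron([0, 1], [1, 1], 1) == 1
```

That's the whole model: no spike timing, no neurotransmitters, no plasticity. If neurons really worked like this, the brain would indeed reduce to boolean algebra.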

The delusion will only keep increasing.