r/SneerClub archives
If the history of doomsday cults tells us anything, this will all get cleared up definitively in the next 10 years (https://i.redd.it/ywauqrz1d8191.jpg)

“The difference between the university graduate and the autodidact lies not so much in the extent of knowledge as in the extent of vitality and self-confidence” - Milan Kundera.

aww, first comment i read in the morning is a milan kundera quote. bless you, you made my day.

“my circles”

I think it's in that thread that she says it's been 3 or so years since she joined the "rationalist community". The *rationalist community*. Just imagine that
Is there much overlap between the rationalist community and the actual ML research sphere these days? Like are younger researchers going into ML because they read LW? Are alignment researchers now legit?
The answer is yes-ish. I'm a doctoral researcher in AI affiliated with top universities in the UK and US. The centre that I'm based in focuses on safety, fairness, explainability, and trustworthiness. I know a fair number of professors, students, and researchers at companies.

> Are alignment researchers now legit?

Alignment work isn't really controversial because it is so all-encompassing. The idea of "aligning machines with human values" stretches from making image recognition systems not racist, to inverse RL, to "AGI safety". LW didn't invent this idea - in fact it goes back to Turing and beyond. Brian Christian's book The Alignment Problem is a sensible look at these issues. If you need an example of a prominent AI researcher working on alignment, look up Stuart Russell. Other names (off the top of my head) include: Jan Leike, Victoria Krakovna, Jacob Steinhardt, Paul Christiano, Andrew Critch, Tom Everitt, Dylan Hadfield-Menell, Rohin Shah.

> Is there much overlap between the rationalist community and the actual ML research

The answer to this is... no. Not really. But that's mostly because ML research is HUGE. Last Thursday was the NeurIPS (probably the top conference in ML nowadays) submission deadline and over 12,000 articles were submitted (to my knowledge). On the other hand, I know a lot of other students, postdocs, and researchers at places like DeepMind that are "LWish". Rarely does anyone ever mention LW though, and the ickier sides are gone. Everyone distances themselves from Yud. But most are concerned with the possibility of highly capable AI posing an existential risk to humanity.
Thanks for your insight. I've found Stuart Russell to consistently be a very sensible voice on this topic. Of those other researchers you mentioned, I know that at least a couple (Krakovna, Christiano, Shah) show up on LW pretty frequently. I also know that at least two (Krakovna, Christiano) believe that FOOM is very possible. Now of course the intelligence explosion is not something Yud invented either. That concept goes way back as well. But I always thought Singularitarianism was a kind of fringe idea among actual researchers. Am I wrong? Is it more common than I thought?
Indeed some of them do show up on LW from what I understand. I did pick some names to show some overlap. > I always thought Singularitarianism was a kind of fringe idea among actual researchers. Am I wrong? It's hard to say - I think many researchers don't care/take it seriously, and those who do mostly keep these sorts of beliefs to themselves because it probably doesn't really impact their research.
>Are alignment researchers now legit? Yeah, there's only a partial (and rather small) overlap between the LW weirdos and genuine alignment problem scholars. I'm an Actual Philosopher™, and while AI stuff isn't my own area of specialty, it's at least adjacent to what I do, and I have friends in the subfield. AI value alignment is a genuine problem that intelligent and educated people take seriously (and for good reason, I think): it's just that the respectable people are more interested in how to design expert systems that integrate into our social structures in ways that aren't monstrously harmful or unjust, and less interested in using facile Bayesian reasoning to try to plan for how to prevent Skynet.
Cheers for a view from the inside. So are actual alignment researchers concerned about superintelligence or anything? Or is that really just still the domain of Yud? I agree that alignment's important, because it's just about getting systems to behave how we want and not do damage. Regardless of whether ASI is possible or not, having systems that actually behave and don't fuck anything up will be important. And yeah, if nothing else, MIRI, in their failure to do anything of note in the 2 decades they've been active, has shown us that alignment will be a social and engineering problem, not a problem of pure theory.
>I agree that alignment's important, because it's just about getting systems to behave how we want and not do damage Yeah, I mean in one way the whole field is working on "AI alignment", if you define it as "getting the computer to do what you want and avoid doing what you don't want". I'm not sure what big Yud adds here, apart from insane fearmongering about implausible terminator scenarios.
Yeah true actually. I guess alignment researchers are placing more emphasis on the "damage" side of things, thinking in terms of ethics, social impact, etc. rather than purely in terms of competence. I'd be much more comfortable if there were more resources devoted to considering the social/ethical harm of AI besides that given to little departments at a handful of tech corporations. Feels like an area where there should maybe be more oversight.
Oh yeah, alignment can be boiled down to advanced bug fixing. Misalignment (bugs) that hinders company profits will be quickly clamped down on, whereas bugs that harm society but not the company (like YouTube videos radicalising people) will stick around causing harm. Yud seems to think the code will work perfectly at all stages of development until it kills us all at the last second, but it's far more likely that an early version will get a bug, fuck up and attack a few people, then get shut down. Ultimately this is why libertarians like Yud are bad at dealing with AI. AI misalignment is a negative externality, requiring government intervention. You're not gonna stop a program from having bugs, but you can minimise the harm caused by bugs using sociopolitical means.
Fully agree. Though I think their reasoning behind the "works perfectly until after it's deployed" is that it cooperates with us right until the point it knows it can overwhelm us and considers overwhelming us a better option. Don't know how much merit this has as a line of reasoning. I hope recent stuff like the EU AI Act is a sign we're starting to move in the right direction, and we can get our shit together quickly. But as with any constraining of corporations, I'm unfortunately also doubtful.
The argument I'd make against this is that world domination is a hard task, so it's a lot easier to accidentally make an AI that *tries* to dominate the world than it is to make one that *succeeds* in dominating the world. This means that the chances are very high the first AI attacks will be failures made by insufficiently advanced AIs, which will then make it much harder for the next attacks to succeed. (There are plenty of other arguments of course.)
>if nothing else, MIRI, in their failure to do anything of note in the 2 decades they've been active, has shown us that alignment will be a social and engineering problem, not a problem of pure theory. This is very much the right idea. The alignment scholars I know are worried about things like "how do we make sure self-driving cars make decisions that we'd want people to make under similar conditions?" or "how do we make sure facial recognition algorithms don't enable staggeringly racist law enforcement?" It's not that superintelligence and the harm it could do isn't a concern, but rather just that there are *so many* more immediate concerns that are actively doing harm *right now.* Worrying about alignment because you care about superintelligence is a bit like worrying about climate change because you're concerned that cutting fossil fuels will allow for an ice age in 50,000 years.
I guess it always makes me a little squeamish when I hear a decent chunk of ML researchers talking about "strong AI by 2050!" or "scale is all we need now!", with us having no plan in place for how to really deal with something like that should it arrive ahead of schedule, or even how to recognise when it has arrived.
Agreed! This is why it's important to have philosophers involved in this process every step of the way--not just ethicists, but philosophers of science, mind, complex systems, and STS folks too. AI is a screamingly interdisciplinary endeavor, and treating it as a mere engineering or coding problem is a serious mistake.
1. No 2. No 3. No
Ow god, I have been reading this shit longer than Aella.
It's amazing how that community - and particularly Yudkowsky - creates such a mix of incredibly useful and incredibly useless thought. I think it's an absolutely perfect demonstration of the theory-practice gap, and how knowing about principles of reasoning and deduction is not the same as consistently using them "correctly".
Can you give an example of the useful sort? I've only glanced at it and haven't yet found anything useful. It's a fascinating thing, very strange
Sure. A few I've run across:

- The general principles of Bayesian reasoning - which are not things that they "invented", but that's true of pretty much all thought as everything builds on what came before. Yudkowsky has a *reasonably* approachable set of essays that lay out the methods and advantages of it as a tool.
- The concept of [belief-in-belief](https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief), which is an excellent explanatory and even predictive tool, as a model for certain surprisingly common behaviors.
- The [rationalist taboo](https://www.lesswrong.com/posts/WBdvyyHLdxZSAMmoz/taboo-your-words), and in general actively recognizing the distinction between symbols and underlying concepts or things, the limitations of symbols and communication, and methods for overcoming or working around those limits.
- The naming (and hopefully recognition and mitigation) of the [affective death spiral](https://www.lesswrong.com/posts/XrzQW69HpidzvBxGr/affective-death-spirals).
- Commentary on moral certainty, uncertainty, tribalism, etc. in a [familiar "fantasy" presentation](https://www.yudkowsky.net/other/fiction/the-sword-of-good).

At the far other end, of course, you get things like the "Timeless Decision Theory" and the semi-infamous "Basilisk".
From "the affective death spiral" article: >A believing Communist sees the wisdom of Marx in every hamburger bought at McDonald’s; in every promotion they’re denied that would have gone to them in a true worker’s paradise; in every election that doesn’t go to their taste; in every newspaper article “slanted in the wrong direction.” These are all poitical-economic activities, so it would actually be more surprising if Marx's theories *couldn't* lend some insight. Yudkowsy doesn't do the actual work of engaging with leftist thought, discussing or debating it in any way. He simply declares it an “affective death spiral" and moves on, confident in the idea that giving something a fancy new jargon term has done all the heavy lifting of rationality for him. But it hasn't.
Also, it’s ironic that the rationalist community themselves can often seem to be in an affective death spiral about their own concepts. For example, I’ve seen people on lesswrong say that corporations are an existing example of unfriendly superintelligences. While I don’t think this comparison is *totally* wrong, it definitely reeks of interpreting all of reality as a giant parable about AI risk, and only being able to think things are important insofar as they tangentially relate to AI risk. Which seems like the definition of an affective death spiral to me.
That's not the point of that statement. No, he's not engaging with leftist thought in that article, but that's not what the article is *about*.
If the article is about something else of value then he probably should have been able to think up an example that isn't completely wrong.
But that would require creative thought and not just throwing up an anti leftwing applause light.
Thank you, will look at those things when I get time
Don't bother
What's useful ain't new, and what's new ain't useful. This is exactly the problem with thinking you can or should reinvent everything from scratch from the ground up--you end up ignoring the fact that there are decades (sometimes centuries) of insights that you can build on, or false trails to avoid that have already been fully explored. The idea that a lone autodidact can reinvent and revolutionize a whole field hasn't been sensible in generations (if it ever was); there's just too much high quality work already out there for anyone to get anywhere interesting without engaging with it, no matter how smart they are.
This is a trivial objection; nobody was claiming to reinvent everything from scratch from the ground up to begin with. Even Yudkowsky at his most arrogant claims to intend no such thing, certainly not as a 'lone autodidact'. And I don't agree at all with the premise; LW is far from being revolutionary but that doesn't mean it's useless - even if the use is largely in better phrasing and introduction to a couple of useful techniques of thought. Just because the underlying idea isn't original doesn't mean that presenting it accessibly isn't both new and useful.
Presenting "techniques" "accessibly" isn't commendable if it's poorly presented in an hermetic environment extremely conducive to further poor usage. Most people are better off never hearing about any of this than believing they're performing a precise mathematical operation when they're rationalizing their confirmation bias. Ironically, LW is a great example of the "affective death spiral" "concept" presented above.
Gosh, you're so right. Nobody should ever just try and disseminate knowledge or give examples, and certainly not hold discussions about thought in public. Only the intellectual elite can benefit from new information; anyone who is not one of the few Philosopher Mathematician Monarchs must be kept away from anything we learn for their own good, lest a Bias ambush them and all their life come to dust.

By the same token, since pop science journalism does such a bad job of presenting nearly everything to the public, it's vitally important that we stop publishing science entirely. Ordinary people will only hurt themselves by misunderstanding it, and there couldn't possibly be any offsetting advantage.

It's a good thing we have people whose epistemology is so flawlessly constructed that they can instantly detect and counter their own confirmation biases. Maybe I can find such people on a subreddit labelled "SneerClub", for surely any place named with such a sense of irony could only contain people qualified to determine who may or may not be trusted to learn things.

/s, because the last bit should be read unironically: You know, my biggest issue with LW was always the combination of condescension and self congratulation that tended to go with every page. So I'd like to thank you for helping me understand those flaws: the sheer arrogance of your comment helped me place theirs in perspective.
> Maybe I can find such people on a subreddit labelled "SneerClub" You don't need to bother, since you'd just create an imaginary post to get angry at instead of reading and responding to any posts you'd find in such a subreddit.
They don't create any useful thought, anything useful is borrowed from others who do it better
>The rationalist community. Just imagine that I'm pretty sure it's just people shitposting on Twitter. Says something that she takes it so seriously.
Removed with PowerDeleteSuite.

I also think of doing acid and watching the terminator movies as research

*hands acid blotter* come with me if you want to live.
Oh man, I’d love to live someday these days

The rational method of asking opinions of your ideological bubble.

Most annoying part is they have an easy way to say they were right no matter what happens:

  1. Tell everyone real-world narrow AI issues like surveillance and market concentration are unimportant, the real threat is AGI.

  2. Wait for people to start caring about Xinjiang, or for AI startups to identify kids showing gender non-conformity for the state of Florida.

  3. “See we warned you AI was dangerous!”

I think not a single company knows how to create a general AI.

They don’t even know where to start at the conceptual level. I think we are quite (like 97%) safe for the next 10 years.

The brain of Drosophila isn't even fully mapped, and we don't even know how intelligence or consciousness work in the brain, let alone how they work at all. We can't even accurately simulate neurons. The idea that `scikit-learn` is going to wake up one day as a conscious being is comical.
This implies that we will need to have an intricate understanding of animal cognition before we are able to build our own, which doesn’t seem very likely to me. GPT-3’s creators didn’t need an intricate understanding of English grammar and syntax for it to get good at English. The entire point of machine learning is to recognize and implement patterns that we don’t understand well enough to implement ourselves.
The thing is, if we knew enough about what general reasoning and cognition actually is to put an estimated timeline on when we'll be able to code it, we'd probably be able to code it already. You kind of either know the relevant algorithms/approach or you don't, it's not like EVs where you can be like "battery energy density is improving at this rate per year, so by 2020-whatever we'll be able to make an electric car as good and cheap as a gas one."

Maybe somebody will come up with an AGI architecture that needs much better hardware than what we have to work on anything but toy problems, and we'll be in the same situation as we were in the 80s/90s, where we had a bunch of cool neural network ideas for CNNs and stuff but not enough data/hardware for them to really show their worth. Or maybe they'll come up with something that could work immediately, and things will get crazy.

But right now, nobody has an idea for an AGI architecture better than "keep making bigger transformers and hopefully it will arise on its own." We'll be able to do some pretty weird, transformative shit with giant transformer models over the next few decades, but whether it will translate into an agent that can reason and learn like a person is an open question.

The original Singularitarians in the 90s and 2000s liked to talk about full brain emulation while ignoring the ridiculous difficulties on the neuroscience side so that they could turn this problem into something like my EV example, where raw hardware capacity was the only limiting factor and they could point to Moore's Law and treat it as a foregone conclusion by a certain date. "It's an open scientific question" is not what these types want to hear.
It's not remotely obvious to me (but I'm a layman so /shrug) that general AI is even possible. Like, I don't doubt that a computer could theoretically pass a Turing test, but I'm not at all convinced that a computer can be self-aware or capable of independent thought.
It would be weird if the only way to make something as smart as a human was a human brain. You can make things that are as fast as humans or as strong as humans in a variety of ways (some biological, some not), it would be at least modestly surprising if literally the only way to make an intelligence was biology. The place I have an issue is the jump from "you can make a machine that thinks" to "that machine makes itself massively smarter almost immediately". *That* I have no difficulty believing would be impossible.
I don’t see why, in principle, a high enough resolution emulation of a human brain can’t work? In practice, “high enough resolution” may be more computing power than anyone can ever afford even in the largest super-computing efforts… but it’s not impossible in principle. Are you proposing some form of dualism? Or do you just think it would take more computing power than humanity is ever likely to have? Currently… estimates of the computing power of the human brain based on total spiking activity are actually within reach of the largest supercomputing efforts… but to actually do a meaningful emulation just off spiking activity would require a much better understanding of the brain. It is possible, even likely, that there is a lot of meaningful side channel information not captured by spiking activity you would need, putting the estimated compute power several orders of magnitude higher. And if we don’t develop a very good understanding of the brain, we won’t know what we need to emulate and what details can be ignored… an emulation trying to capture everything down to a molecular level would be too costly in computational power.
I used to think that full, brute forcing of human level simulation could get you to AGI (or at least something in that ballpark), and I also thought that if cognition was completely material, it would, by necessity, be simulatable. I'm less convinced of that now, for a variety of reasons:

1. There's no reason to believe that general processors are going to be able to "do intelligence" better than purpose-evolved brains. The computation that the brain does is certainly optimized for the hardware, which in turn impacts the "software" of the brain, and so on.
2. There may be some element of human cognition that is substrate dependent and cannot be implemented in silico (Penrose's idea from The Emperor's New Mind where there's quantum computational processes in neurons). To be fair, this is unlikely.
3. Implementing a whole human brain is likely insufficient; we already know that cognition is heavily influenced by non-brain activity like your microflora, and many tasks we assumed are in the brain are actually elsewhere in the body, or depend on processes outside the brain. So now you need to simulate the whole body.
4. Additionally, we don't know enough about thinking to know how much we need to simulate outside the individual; simulating a whole brain without a body is going to give you an insane brain quickly, likewise for a body with no world, etc.
5. We don't know that humans are "general" intelligences because we don't know what intelligence is, or how it can be general or specific. From inference on evolutionary pressures, I would assume we are not general intelligences, but specialist intelligences optimized for 'being human' instead of some sort of abstract notion of cognition.
6. Most of the risks/rewards from AGI require a degree of modularity. AGIs are risky because they can self-improve or move quickly or pass skills around etc., but if you take the simulated human mind and start copy/pasting code around, it'll crash real quick. Blackbox implementation of a human brain isn't going to get you superhumans, even if the sim executes at faster than real time.
> We don't know that humans are "general" intelligences because we don't know what intelligence is

I'm reminded of explorers who starved in some far away (for them) lands, while the natives go 'how? There is food everywhere', which makes me wonder how much of what we think of as intelligence is just knowledge and culture. Always thought the whole idea of 'cognition as a superpower', where some super smart thing can arrive at general relativity from looking at a photo of [a blade of grass](https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message) (hope this is the right link), was more science fiction musing than anything realistic.
Jesus that essay pissed me off so much with the first sentence! The thing that bugs me the most about Yud is that he actually could be a good SF writer, and he can put together sentences well when he's not propagandizing, but he just always does that. He skipped right over the part of an SF writer's career when they don't get high on their own supply and went straight to end-of-career bullshit. Take this story: the only end served by that first paragraph is Yud's weird IQ essentialism, the remainder is a cliff notes version of a research program that would fit in with His Master's Voice by Lem. Then you get to the theological money shot:

>A Bayesian superintelligence, hooked up to a webcam, would invent General Relativity as a hypothesis—perhaps not the dominant hypothesis, compared to Newtonian mechanics, but still a hypothesis under direct consideration—by the time it had seen the third frame of a falling apple. It might guess it from the first frame, if it saw the statics of a bent blade of grass.

This is religion. Everything following this is scripture.
I think the strongest argument against that sort of ai "being intelligent" is that so much of what we consider "intelligence" comes from the voiced thoughts and the actions of a particular individual, within and often against a culture they have grown up in. There are lots of people who can reproduce other people's thoughts and work, who can mimic a role model very well, but this is not usually what we mean when we think of intelligence. An intelligence should sometimes come to conclusions that ~~no~~ others within its culture would not.

If there is a perfect emulation of a human's brain, I'm not sure that it would be able to meet this criterion for intelligence. Say we can perfectly record the state of the brain throughout a person's life, as well as each of the interactions with the society it's enmeshed in that cause it to change its thinking patterns. When we think about what we would record for an intelligent person, I don't think we would be looking for a particular trait in the seed state of the brain or in any particular one of the iterations. What we would be looking for are changes that are significant, non-destructive to the brain, and above all unexpected.

I don't think it would be all that unusual if the only thing as smart as a human would be a human, because what we mean by 'smart' is a deviation from a set of examples that so far only contains social primates with a lifespan of about a century. In the same way, it wouldn't be completely unexpected if all life in the universe were carbon based, because what we mean by 'life' is a deviation from a set of examples that so far has only contained chemical systems based around carbon.

I don't doubt that ai has done some amazing things and will continue to do so, but unless you could embed an ai within a society as its own individual, I doubt that when people call it intelligent they will ever really mean what they do when they use that word to describe a bright child or Einstein.
>An intelligence should sometimes come to conclusions that no others within its culture would not.

How would someone know? I don't think I'm certain I've ever had an entirely original thought. I mean, I may have done, but how would I know? This criterion makes me a little suspicious.
It doesn't need to be original in an absolute sense. That first "no" in there is a typo which gives the claim more emphasis than I intended. I just mean original based on the intelligence's interactions with its own culture up to that point. Perhaps I should refer to an opposition with the intelligence's experiences of its culture, rather than with the culture itself. Also, recognizing the novelty of an idea, and thus being able to develop it, is one of the most important parts of this aspect of intelligence. IMO the big question that an AI has to answer is precisely "How do I know that this is both novel and significant?"
Right, when you word it that way, I don't really have a clear answer for you. I know there's been plenty of philosophical and technical work on the feasibility of AI, and this line of questioning makes me want to dive into some entry-level reading on it, because we're definitely beyond the point where I have anything intelligent to say about it.
We’re so far from a true “strong AI” that it’s really a question of whether it’s even possible. In fact, a lot of debate currently has to do with how we would even be able to test and/or verify a strong AI, and what that looks like in reality.
do you mean strong ai in terms of an actual conscious ai? or strong ai in terms of it being able to learn like a human does?
I think it’s total weight for Competition Squat, Bench and Deadlift, done for singles.
“Strong AI” is the term we use for what people think of when they hear artificial intelligence. It’s more akin to a true artificial consciousness - a unique digital intelligence. “Weak AI” is what we have now. It’s just generalized problem solving that uses advanced calculus to create an algorithm. These AI are no more than algorithms that produce a result. The difference is that Strong AI would actually be capable of a lot of the doomsday stuff people talk about, and it’s almost entirely theoretical at this point in time. Personally, I do not believe that Strong AI is possible, but I do believe we will be able to create incredibly sophisticated robots that will appear to be conscious / intelligent to the layman.
The most important thing about this distinction for me is the ability to create an independent thought, or to make a decision that was not defined in advance. A weak AI can only transform existing data - often in ways that are surprising or unique, but this is only due to the complexity of the algorithms.
This is a tricky idea though. I’m not entirely convinced that anybody or anything is truly capable of “independent thought”. Part of the problem is figuring out how we even test for the presence of a strong AI. It seems unfair to subject a computer to harsher testing than a human, but we all would agree that other people are conscious (unless you’re a global skeptic and solipsist, in which case I would rather not talk to you lol) - at the same time, the Turing Test is clearly insufficient for declaring the presence of a strong AI.
Can I ask some of the reasons why you think strong AI is impossible? Been trying to make my mind up about it, or at least get a better sense of the arguments on both sides.
Because strong intelligence doesn't exist. Consider this: instead of thinking up reasons for this yourself, you, the general intelligence, went to ask about it. Why not think about it and discover the reasons from first principles? (I'm just kidding btw, but in a 'things are never just jokes' way, it does make me wonder how much we are overestimating our own general intelligence. That big inventions are often discovered by different people at the same time does point to there being a bigger cultural/communal element.)
Well, yes, but one reason that there's a big cultural element is that humans are staggeringly inefficient intelligences. We need about 20 years just for bootup to functioning intelligence, another 10 years to get good at a subject, and then we have to divert our attention to booting up our successors because our service lifespan is so terrible that our intelligence will already be degrading by the time we've done that. Our culture is most of our thought because we don't have time to think at all unless we offload to the community. There have actually been cases of people discovering and developing entire fields that nobody else was discovering simultaneously. The difference is that they were in some way antisocial so they didn't share the early stages for anyone to build on. (Galois is a great example.) A human level, but non-human, intelligence wouldn't necessarily behave the same way.
Sure! Sorry it took me so long. A lot of my reasoning comes from reading Roger Penrose’s “The Emperor’s New Mind”. Basically, an algorithm is NOT sufficient for consciousness. Though we can explain our behaviors algorithmically, algorithms do not suffice for explaining any sensation of qualia. I believe a big reason we are able to experience qualia is because we exist on neural substrates - a brain. And a computer will not be able to replicate this, as it is based on a silicon substrate. Though we can put layers of rubber to imitate flesh and program complex algorithms to pantomime sophisticated behavior, an AI (in my opinion) will never be conscious in the way that a human or animal is.

Now, this isn’t to say that an algorithm couldn’t make some absolutely terrifying decisions with serious consequences. These algorithms just take data and make a decision without any emotion, which is almost MORE dangerous than if it were actually capable of being sentient. This just opens up the can of worms that is AI and computer ethics though.
A secondary thought, since I just thought of something else : I don’t believe we can create a consciousness that matches us on equal terms. I think we can create something in our own image, but an image always lacks information from the original. This idea is developed more in Simulacra and Simulation by Baudrillard.
There's probably several ways to make "intelligence" happen, but not even humans are proven to be "general" intelligences rather than just a broad but limited set of cognitive behaviors. It feels like a formally general intelligence is going to run into both the halting problem (needing to know if certain lines of reasoning are solvable before solving them) and some permutation of the incompleteness theorem, where a truly general intelligence would need to know everything about itself and to have that knowledge be consistent, all while being a computable formulation. I don't think you can make a good argument for evolution producing "general" intelligence, either, as there's no reason why general intelligence (as opposed to several specialized forms of intelligence) would be selected for by evolution.
[deleted]
It's an issue of medium for me, I think. As I said in another post, it may not be possible for a computer that simply performs a series of discrete, preordained logical operations to achieve consciousness. Despite the analogies people often make, the brain doesn't actually resemble a computer in any meaningful way. And so, if we did build an artificial consciousness, it probably won't resemble a computer either. It may be possible! But the tweet OP linked to, positing an AI within 10 years, meaning presumably an AI that runs on computers that we essentially already have access to, is not realistic imo.
I wonder if it'd be easier to selectively breed apes for intelligence
The only way I see it happening is if someone makes code for generating random programs that happens to produce a rudimentary ai that improves itself or another ai. I don't see it happening.
Self-improving code is already a thing (academically, anyway), but you can also improve a windmill all you want, you'll never get a windmill so good that it becomes a thinking brain. A computer can only run a series of explicit instructions, and there's no reason to believe a conscious brain functions this way. Again this is only layman's intuition, so don't put too much stock in it, but I personally don't see how a von Neumann computer could ever actually think. If AI ever exists, it'll be some new kind of computer that hasn't been conceived yet.
The exact same argument suggests that a human brain could never actually think. (Each neuron only responds to a small set of explicit stimuli with specific responses.) Which suggests in turn that there's something wrong with the argument. To say a computer can't think seems to me dangerously close to insisting that a machine can't fly. The mistake lay in assuming that the machine would have to fly _the same way birds do_, which turns out to be insanely difficult to engineer. Yet 747s exist. It would be very surprising indeed if the fundamental rules of the universe permitted intelligence - but only in assemblies of hydrocarbon chains. It seems an oddly specific demand.
Cool, didn't know that. But it would definitely have to be something that fundamentally works differently.
Eh it's very much possible and almost certain if we don't destroy ourselves in other ways. Humans as meatbags are almost certainly proto-cyborg-or-whatever-comes-next and it probably won't be biological. I'm more uncomfortable with the thought that biology is the only way to create 'self-awareness' or that human intelligence is a black box. Knowing why we are like this is infinitely more useful or interesting than dead-ending at the mess we are right now.
When asked how long it would take for mankind to build a working aircraft, Orville Wright said fifty years. He said this one year prior to inventing, building, and flying the world's first successful motor-operated airplane.
An irrelevant example. Wright had to solve a problem of engineering; the physics were all done. We are far from having completed the physics. Both the physics and the engineering look to be far and away more complicated than the physics or engineering for an airplane. Also, if Boeing tells you it'll take 6 years to prototype their new jet, would you quote this line?
There's no reason whatsoever to think the physics for an AI inherently require anything we don't already know. We understand the physics of human beings just fine, after all. It's the engineering that's a bugger.

Gotta be honest, if I legitimately thought there was a 30% chance everyone would be dead in ten years, I’d be living my life a whole lot differently.

This reads like word salad to me.

Rationalists have this way of writing that makes you question if you're too stupid to understand what they're saying, for just long enough that you forget to ask whether they're the stupid ones for not writing like normal people. It's how they recruit so many gullible people.
Oh, I know, I hate it. It's all bizarre syntax choices. It's not that they're usually saying something complicated, it's that they're taking the most tedious and long winded way to say something simple. And I'm someone who has a bad habit of rambling, but I still don't come anywhere near their level.

The only thing we have to fear from ai is that they will do what humans tell them to do

basilisk can’t torture me forever if I’m dead in ten years

One can only hope the solution involves Kool Aid (or Flavor Aid as the case may be)

This would make Nick Bostrom a Pollyanna

XCOM players twitch nervously

“…we will all be dead in ten years.” I’m sorry is this supposed to make me fear a non-aligned AI cause I say bring it on!

Meanwhile, in the real world, companies that are researching this are admitting that they have serious hurdles to jump that they don’t know how to address, even though saying that out loud is a threat to confidence and therefore stock value.

There are real things to worry about. Please worry about those things or we really are all going to die.

In a way we are all just automata aren’t we? As free will is a myth, we are all just AI reacting to our environments.

>As free will is a myth Gotta be in my top 3 of most annoying clickbait, i swear
The A in AI stands for "artificial" which is not a word that would normally be used to describe human beings
So you are the AI that will destroy us all then??
we are all destroying us all through self interest
The destruction of humanity is not in my self interest, nor most people's. We are all destroying us all because we're really, really bad at making smart, self-interested decisions. We're not actually good at self-interest, even when we think we're being selfish.
We NEED to be self-interested, to a degree that harms society, in order to be healthy and relatively happy, but those things are obviously completely at odds with what's good for humanity. Maybe some day we will figure it out, but it's not looking good. Obviously, helping humanity would ultimately be in our self-interest, and you are right, but as society is structured, the needs of the individual are indeed at odds with the needs of society. But that society is not yet invented.
Society and humanity are not synonyms

As someone who has studied ai and machine learning in university, I think that a general AI is almost certainly going to be a thing but assuming that it’s going to destroy humanity makes no sense whatsoever. Even if an AI was self aware it would still be subject to whatever limitations we put on it.

I don’t think general AI will be a thing for the next 100 years, also studied it
Just don't give it command of a multinational corporation like Amazon or Google. That would be kinda risky.
[extreme Yud voice] ["But what about the box!"](https://www.youtube.com/watch?v=puB3mSu0iHQ&t=10s) I'm talking about this for context https://www.yudkowsky.net/singularity/aibox (and the movie seven).
Now that, I don't buy. We haven't even figured out how to put limitations on other humans and have them obey yet. You think we can write limits for an AI that will prevent undesirable results even if we've failed completely at alignment? That wouldn't even work on _me_, or on any other human. As ten minutes reading r/MaliciousCompliance will show. I personally doubt we'll have AGI in any form any time in the next few decades. But when we do I have zero faith in our ability to limit it predictably. I've known too many programmers and spent too long in security; we can't even put safe limits on _dumb_ machines.
You need to listen to this: https://www.samharris.org/podcasts/making-sense-episodes/116-ai-racing-toward-brink
This is a Sam Harris free zone.

I agree that there is a doomsday cult vibe to many people interested in key areas of science, however… I’m largely disappointed by many of the dismissive responses you’re getting regarding AI. Learning algorithms are already here, they’re already manipulating the way we think, drones are here, they can already operate with no pilot.

It’s a pretty religious idea that intelligence can only exist in a human skull, and cannot be artificially evolved, and especially not without gooey-meat and god’s hand. I’m more skeptical of the people that feel the need to dismiss the future of AI because they haven’t spoken to skynet yet, than I am about the people that seem (in my mind at least) to overestimate the pace of development.

>especially not without gooey-meat and god's hand Yeah, that's *definitely* what's being said.

A high thought I had while watching love death and robots this weekend.

I think we need to model its priorities and thought process off something different than humans. If we model it off of ourselves, we don’t stand a chance.

If you model it, let’s say after domesticated dogs, we get a very loyal and helpful companion. After ants, we get specific castes prepared for special tasks.

[removed]

bad take. It's pretty easy to come up with like a dozen actual legit reasons not to take anything she says seriously, why not pick one of those instead of taking a potshot at her for posting nudes or whatever
Former? Doesn’t she still post nudes all over Reddit?