r/SneerClub archives
Scott refutes the accusation that he is a doomsday cultist, but... (https://postimg.cc/qtGff4TP)
36

Think about how sad this is: these folks have convinced themselves that they can build God, and all they’re going to get is a chatbot. This is them settling for a lookup table as their deity.

Okay, someone please correct me if I’m wrong, but isn’t GPT-3 and so on still, essentially, just a function call?

Like, it takes in an input, converts it to a bunch of numbers, does linear algebra to them to spit out another bunch of numbers, then converts those back into readable output.
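As a rough illustration of what that "function call" amounts to, here's a toy sketch (random made-up weights and tiny sizes standing in for the billions of trained parameters; nothing like the real GPT internals, just the shape of the computation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": random weights standing in for the billions of trained parameters.
VOCAB, DIM = 50, 16
embed = rng.normal(size=(VOCAB, DIM))          # token id -> vector
w_hidden = rng.normal(size=(DIM, DIM))         # one "layer" of linear algebra
w_out = rng.normal(size=(DIM, VOCAB))          # vector -> score per token

def next_token(token_ids):
    """Numbers in, linear algebra, numbers out. That's the whole trick."""
    x = embed[token_ids].mean(axis=0)          # crude stand-in for attention/context mixing
    h = np.tanh(x @ w_hidden)                  # nonlinearity between matrix multiplies
    logits = h @ w_out
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # softmax: scores -> probabilities
    return int(probs.argmax())                 # pick the most likely next token

prompt = [3, 14, 15]                           # pretend these are tokenized words
print(next_token(prompt))                      # ...and out comes another number
```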

Where exactly does the “break out of the box and kill everyone” happen? It’s literally just solving math equations!

And if GPT-3 is perfectly safe (at least from world domination), isn’t its success evidence against AI x-risk, proving that you can achieve powerful results with a “safe” architecture?

You are 100% correct. The follow-up question is, "why the hell do these people think that a language generation model is the harbinger of the eschaton?" and Scott answered it: because they're in a doomsday cult.
[deleted]
Another thing to note is that plenty of AI researchers are doing something rather disingenuous. On one hand they know full well that they need actual human-made text for training, and that they need to avoid introducing AI-generated stuff into the training dataset. On the other hand they are perfectly happy with the whole mystery of whether it is parroting the input dataset or not, even though they know it is parroting the input dataset, as a basic practical fact that they use in a very practical way any time one of them gets the idea to try to augment the input dataset with the AI's own outputs.

In the image generation world, plenty of AI researchers even outright describe it as performing a (lossy) compression of the input dataset, and there's plenty of work on video or volumetric data compression using the same architecture. And yet when the blurbs come out, it's suddenly doing something creative and totally nothing like money laundering for copyright. Apparently, when you infringe *everyone*'s copyright, you're not infringing anyone's copyright. (Note, by the way, that you *can* augment training datasets, e.g. with rotated images; it's just that rotating an image isn't a generating-bullshit operation.)

If they genuinely thought it was doing something intelligent, they'd keep trying to train it on its own outputs, hoping that it would prove novel theorems or otherwise make some kind of progress (even if subject to linguistic or other cultural drift). But *everyone* in the field knows that this is simply not how it works. Or for image-generating AI: obviously, if it actually were replacing artists, you could get entirely novel art styles by training it on photos, some set of public domain artworks, and its own output. Photos would provide a similar connection to the real world that artists have. But again, everyone knows that this is just not how it works.
[deleted]
It's a very lossy compression, of course. It is literally the case that the same models, used on fewer images, work as perfectly usable image compression. Just look up "autoencoder image compression" if you don't believe me.

The overfitting argument is frankly ridiculous - some images are present in the training dataset a very large number of times, so it starts to overfit those images far sooner than it starts to overfit obscure images (if it ever does). If you ask stablediffusion for an image of the "iron throne", it will produce pictures of the iron throne from the game of thrones, albeit with variation in the number and exact placement of swords. I did that, and all I got were images where, even having looked at the original iron throne minutes earlier, I wouldn't have been able to tell they weren't the original without looking at the original again and counting individual swords.

But you can say: there are many terabytes of input images! It can't possibly represent all of them in the neural network, it is too small! Yeah, but there are many near copies of valuable IP, like the iron throne, and so during training the iron throne gets represented with adequate accuracy - enough that from memory a typical person would not know which one is the original. If you over-represent some image in the input, it will in fact be stored as a compressed representation - the more you over-represent it, the bigger the "part" of the 4gb devoted to that image, and the better the quality. Obscure images that nobody ever bothered to copy, it won't remember.

Now, there's fair use and transformative use and all that, but that has more to do with the context of how a copy is used (e.g. parody, commentary, etc) and not with visually irrelevant changes to the number of blades. Copyright law predates perfect automatic copying, and there's plenty of precedent having to do with Disney suing people for copying their particular mouse and other IP - entirely by hand, and at times from memory.

edit: also, curiously, no matter what you do you can't get StableDiffusion to spit out an iron throne made of Kalashnikov rifles (or at least, you couldn't last I tried). If a well known artist made a popular iron throne made of Kalashnikov rifles, though, then it would be able to spit it out, and then AI art lovers would point at it and talk about how it could creatively change swords into rifles.
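For anyone who doesn't want to go looking, here's roughly what "autoencoder image compression" means, as a toy sketch (tiny layer sizes, random stand-in images instead of a real dataset; purely illustrative):

```python
import torch
import torch.nn as nn

# Toy autoencoder: 28x28 greyscale image -> 32 numbers -> reconstruction.
# The 32-number bottleneck is the (lossy) "compressed" representation.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 28 * 28))

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
images = torch.rand(256, 1, 28, 28)            # stand-in for a real image dataset

for step in range(100):
    code = encoder(images)                      # "compress"
    recon = decoder(code).view(-1, 1, 28, 28)   # "decompress"
    loss = ((recon - images) ** 2).mean()       # how lossy is the compression?
    opt.zero_grad()
    loss.backward()
    opt.step()

# On real data, images the training saw (especially many times over) come back
# close to the original from their codes; arbitrary unseen images don't.
```

The bottleneck vector is the "compressed" file and the decoder is the decompressor, which is the sense of "lossy compression of the training set" being described above.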
[deleted]
I think you fundamentally misunderstand what "overfitting" is. "Overfitting" is the name for what happens when the loss metric on the validation set ceases its decline and begins to grow. Nothing fundamentally different starts to happen at that point! There's no switch to a different algorithm. It doesn't suddenly start learning something it wasn't learning before "overfitting"; it always learns both the high-level regularities and the specifics. If the validation dataset contains the same popular-culture items as the training dataset, the concept of overfitting doesn't even make sense, since there is no penalty for memorizing the specifics of those popular-culture items. In short, overfitting is not some separate process different from normal training.
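To make that concrete, here's a toy sketch of how it's measured in practice (made-up data and a deliberately oversized model; the point is only that the update rule never changes, someone just watches the held-out loss):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Tiny noisy dataset, split into train and validation halves.
x = torch.linspace(-3, 3, 60).unsqueeze(1)
y = torch.sin(x) + 0.3 * torch.randn_like(x)
x_train, y_train, x_val, y_val = x[::2], y[::2], x[1::2], y[1::2]

# Deliberately oversized model so it can memorise the noise.
model = nn.Sequential(nn.Linear(1, 200), nn.Tanh(), nn.Linear(200, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for epoch in range(5000):
    opt.zero_grad()
    train_loss = loss_fn(model(x_train), y_train)
    train_loss.backward()
    opt.step()                                  # the exact same update, every epoch

    with torch.no_grad():
        val_loss = loss_fn(model(x_val), y_val)
    if epoch % 500 == 0:
        # Train loss keeps falling; eventually the val loss bottoms out and starts
        # creeping back up. That bookkeeping observation is all "overfitting" names;
        # the training algorithm never switches to doing anything different.
        print(epoch, float(train_loss), float(val_loss))
```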
[deleted]
I think there are two issues here: the input dataset has a lot of copies of popular art and very few copies of unpopular art, so it effectively trains for more epochs on the popular art. For the iron throne, you get extreme overfitting - there are many styles of thrones and many interpretations of what “iron” could mean, and it just fails to capture the variation it might have captured if the training set had somehow been purged of excessive copies. On the other hand, for human hands, you want it to capture the number of fingers etc. You don’t want it to generalize to fleshy appendages that could potentially be grown in the presence of teratogens. Basically, what is extreme overfitting for a pop culture reference is “too generalized” for many aspects of human and animal anatomy.
[deleted]
> Its kinda fine bc a lot of people also probs want to be able to make the game of thrones iron throne, and we aren't getting exact screengrabs out.

But we are getting copyright-infringing images out - well, at least as per past precedent. Obviously, if you had to feed a process a gazillion images of the original thing to get it this close, it isn't an independent recreation. Is it modified enough not to be a "reproduction of the original work"? (Note that the law does not say a "copy", it says a "reproduction".) How accurate do you think a reproduction has to be? What's the similarity metric? Hint: the law existed far longer than computers and digital image processing, or for that matter digital copies. You could have a reasonable person study the original iron throne for a reasonable length of time, then a reasonable length of time later be presented with the original and the copy, and be unable to tell which one is the original. So it is, of course, a reproduction of the original, and a very accurate one to boot, slight variation in blade placement notwithstanding. It is based on the original, and it can substitute for the original, therefore it undermines the author's ability to get paid for the original, yadda yadda - if you look at what copyright law is for, the purpose of it, and try to apply that purpose, it's obviously a "reproduction".

The legal reason Google gets away with image search is that their use is "transformative". Not in the sense of changing the images, no - in the sense of not substituting for the original's use. So maybe the AI itself could be deemed transformative, but its outputs are infringing if used in a certain way? That does seem sensible, but: someone's going to train a similar neural network to predict a frame of an entire TV series from season#, episode#, frame# and publish a torrent, just because this way they could compress it into a smaller space. There's a vested interest in banning such torrents themselves as infringing. (Even though theoretically you could come up with some kind of "transformative" use, like querying it for numbers outside the original frame range.)

edit: also, re defining overfitting, I think the way AI developers would actually define overfitting would be to compute the loss metric (the same one that is minimized on the training dataset) on the validation set, and call it overfitting when that starts growing. You can't really use human ratings here, it would be too expensive.

edit: also, there's the question of which way the money of large corporations will sway the law. I kind of doubt they're going to just use random images off the internet to make future movies - too much risk that it would very closely match someone's art, and they can't claim independent creation. They may make some sort of clean-copyright AIs trained on their own imagery as well as public domain works, and there could be a lot of money in that, and if that's the case then they won't want competition from those who just freeload off all the art on the internet. I think ultimately the law will deem the AI's outputs a derived work of the training dataset - once the "AI" is no longer new and shiny, it's just a purely mechanical process. This is also, of course, the right thing to do if you want to allow a market for creating better training datasets.

edit: also, sidenote, I need to update my SD... I'm expecting that if it got better at fingers it also got better at matching the exact number of blades in the iron throne.
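The "predict a frame from season#/episode#/frame#" network isn't hypothetical in structure, by the way; it's essentially an implicit neural representation. A minimal sketch under toy assumptions (random stand-in frames, tiny sizes, nowhere near a practical codec):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in "video": 10 frames of 32x32 RGB noise. A real use would load actual frames.
frames = torch.rand(10, 32, 32, 3)

# Coordinate grid: every (frame, y, x) position, scaled to [0, 1].
f, yy, xx = torch.meshgrid(torch.arange(10), torch.arange(32), torch.arange(32), indexing="ij")
coords = torch.stack([f / 9, yy / 31, xx / 31], dim=-1).reshape(-1, 3).float()
pixels = frames.reshape(-1, 3)

# The whole "codec" is this network's weights: query a coordinate, get a colour back.
net = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 3))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    opt.zero_grad()
    loss = ((net(coords) - pixels) ** 2).mean()   # train the network to memorise the frames
    loss.backward()
    opt.step()

# "Decompressing" frame 3 is just running the function call again:
frame_3 = net(coords[3 * 32 * 32:(3 + 1) * 32 * 32]).reshape(32, 32, 3)
```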
[deleted]
The point is that it would need some artist to make an iron throne out of kalashnikovs before AI art enthusiasts could tout this as an example of AI’s creativity. Also, just ask stablediffusion for the iron throne. I’m on my phone and it’s late, so I can’t upload a few pics, but the result is really frigging close to the original from the TV series - far closer than anything would be if there weren’t many thousands of images of the iron throne in the training dataset. Overfitting? If the iron throne from the series is also in the validation set (perhaps a slightly different view), even an exact recreation of every sword would not constitute “overfitting”. Also, re the argument that the model doesn’t contain mickey mouse: that may well be empirically false - there is a burgeoning field of analyzing these AIs and extracting parts of the training dataset from them. Of course, you are never going to get all the terabytes back out, but you may well get back the highly popular images.
Humans are prone to anthropomorphise things and unless you go out of your way to trip it, GPT3's output is easily coherent enough to maintain the illusion. You're absolutely correct that it's all nonsense, but it's also a very common line of thinking and entirely unsurprising to hear when you consider that these goobers have zero understanding of how anything actually works and treat anything resembling AI as a sci-fi magic box.
We’re so prone to anthropomorphise, we regularly find religious symbols in things like wood grain and toast.
Yeah, another weird thing: apparently those AI image programs don't get trained using AI-generated data. I assume this kind of shit is done because otherwise it will lead to the classic overtraining problems where the ML program just turns into shit. Highly speculative of me, so take what I say here with a whole bag of salt, not just a grain. But I assume this could lead to some problems, esp. as apparently training the GPT models costs millions of dollars - lol if that could all be wasted if you link the output into the input (and well, it is trained on the internet).
Surely training it on the Internet is almost as bad at this point, considering how many auto-generated articles there are around - that, and outright false information.
So far it seems to work. But I have to add that I have no idea how much work they put into removing those autogenerated articles (or articles just copied from wikipedia/imdb etc. with a few words added in - in fact there are entire channels on youtube that just read you the shuffled list of imdb trivia for a certain movie). And iirc, as soon as a model starts to get over-trained or stuff like that, you cannot reverse it, so I wonder if that will also happen to GPT models. I look forward to people shouting 'the AI has dementia! this will be revolutionary for dementia research!' and all the real dementia researchers going 'What? No, oh god, stop, please!'
To be fair to Scott, hyperbole aside, he does state at numerous points that the AI is 'dumb'. I'm unwilling to let him off the hook for this, though, as he does little to explain, in this essay at least, that the chatbot is just that - a refined chatbot - and nothing even remotely resembling intelligence. There is also a bunch of esoteric spooky handwaving stuff, and the aforementioned hyperbole regarding this being the potential herald of the apocalypse, or whatever he was doing in the linked extract.
I think the article is a reasonable case for AI "alignment", in the sense that relying on massive linear-algebra black-box machines will likely produce unintended negative externalities. In other words, "we should fix the bugs in our coding process before putting it in charge of things like healthcare systems". The problem is where the scenario jumps from "a dumb linear algebra function does not understand human laws" to "a forward-planning intelligent entity deceives us all and accumulates power until it can execute a 100% perfect plan to kill everyone on the planet all at once". For some reason evidence of the former scenario is taken as evidence of the latter.
That's always been where these schmucks fall down. I can see fairly limitless scope for damage to the fabric of society with just a couple more iterations of the tools currently available, even without the need to actually grant control of vital infrastructure to these systems. China has banned AI imagery without a watermark for very good reason. When it comes to actually constructing a coherent, true-to-life view of the world, the waters are about to become a whole lot muddier. At no point does 'intelligence' factor into it. The issue is that people like Scott seem to willfully ignore the perfectly obvious and almost-within-reach damage that these dumb systems could be used for and jump right to super AI apocalypse. His most restrained scenario for misuse in this particular essay, which he just brushes over like it's a joke btw, is that eventually someone will make autonomous weapons that won't be properly regulated and deaths will occur.
Aaronson's [Reform AI Alignment](https://scottaaronson.blog/?p=6821) post was (IMO) a pretty good attempt to get the cultists to see reason, in language they would understand and from someone they would respect. From where I'm sitting, though, I can't see that it had any impact. Cultists gonna cult.
I don't know, I read it recently (I was originally curious whether he was going to have another take on SBF), and while he understands the problem with overselling a "quantum wormhole", he's completely fine with AI being similarly oversold, going no further in critique than noting that skeptics call it a stochastic parrot. And he knows a number of people who may very well suffer some sort of Kool-Aid-drinking scenario, but he doesn't know anyone who's losing sleep over the quantum wormhole. So the decent thing to do would be to prioritize directing the anti-overselling, we-scientists-got-to-be-honest stance towards the frigging AI. Because the worst that's going to happen from stupid wormhole reporting is someone getting more funding than they should have, while the worst that's going to happen from AI overselling is potentially a bunch of cultists winding up *dead*.
I think a better analogy is any other dangerous technology that disrupts the balance of power between an individual and society, like guns, or cars. It's not that one day an AI will wake up and want to take over the world or have secret goals. But it is probably the case that *a human being* will have nefarious goals and having access to (much improved) generative AI will increase the amount of damage that individual can do to others. (E.g. "write me a python program that can scan every major bank for vulnerabilities and then exploit them to install rootkits.") In that respect, it's like any other technological change -- figuring out what are the correct safety boundaries will require social and political mediation and likely ultimately some kind of regulation. The reason the doomsday cultists hate boring answers like this is in part that they don't believe in the efficacy of society or government as a matter of principle (they are mostly nutjob racist libertarians or adjacent) and also this isn't a story that lends itself well to milking people for donations so that they can write Harry Potter fan fiction.
also, Scott is absolutely one of the LessWrong AI cultists
GPT is effectively the predictive text from your phone on steroids. They’ve fed it a sizable chunk of the internet, so it can reliably regurgitate stuff that commonly shows up there. But when it comes to synthesizing new stuff it’s incredibly confident and typically really wrong.
Yes, all any of these ML AIs do is linear algebra. I don't know that this is necessarily a good argument against some hypothetical future AGI being in some way similar to these systems. I can't be arsed to dig it out, but I believe I remember Scott doing something where he compared our existing understanding of how neurons work in the brain to current ML models. His argument was that they were very similar. At the very least, right or wrong, it insulates people from "it's just linear algebra" as a defeater.

We don't understand consciousness. We don't know how it emerges. That leaves a very exploitable gap to argue that things like GPT, which clearly are not even close to conscious, might presage the advent of self-aware FOOM. Personally I think actual independent, dynamic interactivity would be necessary for consciousness to develop, but that's just a guess.
I think it's this: https://astralcodexten.substack.com/p/somewhat-contra-marcus-on-ai-scaling

The point for me is more like: why should anyone *worry* about ChatGPT? In what possible way can it be construed as vindication of the Yudkowskyan agenda? This is only possible within the context of a long chain of assumptions from that same agenda, something like:

1. GPT-3 can be scaled up until it becomes an AGI, with its overall design and engineering characteristics largely unchanged
2. When it becomes an AGI, it will be able to "unbox" itself, creating an existential risk to humanity
3. The only human intervention that can prevent this is to build the Acausal Robot God
I think a big part of the idea is that by trying to model text, the model ends up somehow replicating the human mental processes that create text, complete with some kind of motivations. Of course, the reality is that the AI in question needs enormous amounts of text for training - far more than any human would ever read or hear in a lifetime. That is precisely because it isn't internally replicating human mental processes, but instead doing something that is a lot more data-storage-like in nature.

Interesting advancements towards AGI are those that reduce the amount of training data: all those game AIs that self-play, especially the ones that learn the rules and then self-play; an AI hooked up to a robot and structured to simulate the environment in one network, learn strategies in another network, etc. A robot that picks up a stick and presses a button (located outside its reach) with it - without any preconceptions or watching anyone do it - that's a lot more serious than ChatGPT. When that happens, rationalists will miss it; they'll be obsessing over GPT-6 attaining sentience.

ChatGPT is dangerous for a different reason entirely. It is a near-human-level bullshitter, and it is a lot faster than humans. Bullshit has a lot of power over people. And we don't know if GPT can quickly become a lot more persuasive. The AI isn't any closer to replicating what a plumber does for a living, but it is very close to replicating what Hitler did for a living. And we are only worse off for it lacking any actual intelligence or self-preservation.
I'm wondering if there is even any way for these sorts of hill-climbing or evolutionary algorithms to end up trying to drive humanity extinct. Like, the interaction with the outside world will be trying out a range of strategies, getting punished/rewarded for them, and eventually settling down into a local minimum in the reward space. But the "kill everyone to reap infinite rewards" plan seems like an all-or-nothing type situation. If you kill a thousand people, or a hundred thousand, or a million, you immediately get shut off. It seems like there's no real evolutionary way to jump that gap...
Yeah, that's actually a great point. I remember back in the day one AI researcher pointed out that you may well get to "friendly" AI by using human smiling as positive reinforcement for training, and of course Yudkowsky, being neither formally nor self-educated, started going on about galaxies covered in smiley faces, because he never bothered learning how machine learning actually works - not even enough to write some bullshit sequence about it.

In the case of "AI", there's the optimization that occurs during training - e.g. the autoencoder is trained to reproduce the input dataset most accurately after putting it through a bottleneck. The algorithm for doing the optimization is pretty straightforward, completely unintelligent, and isn't trying to do anything like hack the internet to optimize better. The notion that it has some overarching goal (beyond just going down the slope towards a local minimum) is just an artifact of the language used when talking about it. The function being optimized is so nasty and huge that there's no way to progress anywhere other than a local minimum. The resulting network, on the other hand, does not have any goal-driven aspect to it whatsoever - it's not maximizing or minimizing anything at all, not even a mathematical function.

Another thing is that the real world is 1: low throughput and 2: doesn't provide derivatives and has a lot of random noise. So any actual training is done on some model of the world, whether that be a huge dataset of everything anyone ever wrote, or the outputs of another neural network.
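Stripped of all the framing, that training-time optimization is roughly the following loop (a toy sketch using numerical gradients; real systems use backpropagation and fancier optimizers, but the shape is the same): measure which way is downhill locally, step that way, repeat.

```python
import numpy as np

# The entire "intelligence" of training, stripped to its core. Nothing in this
# loop knows or cares what the parameters control in the outside world.

def loss(params):
    # Stand-in for the real objective (reconstruction error, next-token loss, ...).
    return np.sum((params - np.array([2.0, -1.0])) ** 2) + np.sin(params).sum()

def numerical_gradient(f, params, eps=1e-5):
    grad = np.zeros_like(params)
    for i in range(len(params)):
        bump = np.zeros_like(params)
        bump[i] = eps
        grad[i] = (f(params + bump) - f(params - bump)) / (2 * eps)
    return grad

params = np.zeros(2)
for step in range(1000):
    params -= 0.01 * numerical_gradient(loss, params)   # step downhill; that's all

print(params, loss(params))   # settles into a nearby local minimum and stays there
```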
Yeah, I was trying to think of a way for machine learning to become super-evil, and the best I could come up with was the AI designs a secret oracle that calculates the best way to satisfy secret goal function G, and the higher level AI evolves to just do whatever the oracle function says. But I'm not sure that really works, because the oracle is under the same evolutionary pressures as the higher AI, so it also dies if the machine gets too rebellious. If we think of AI designs as "evolving", then the relevant analogy is not random natural selection, but *selective breeding*.
The other thing is that the "goal" is defined purely mathematically, with no connection to the real world. The closest research has gotten is one network learning to simulate the real world and other networks learning to get something out of the first network - those game-playing AIs that learn the rules and self-play. There's just no point where it turns into "but if I hacked into other computers I could get more computing power for the neural network I'm optimizing, so it could get a higher score" - that would be a different problem, the problem of optimizing a different, bigger neural network. That's the issue with mathematically defined goals: say you have a robot that needs to get from point A to point B. The idea of a scary superintelligence is that it invents teleportation or something; the reality is that you start off with it just walking there, and then maybe it moves its legs very, very intelligently to get there 0.1 seconds sooner, because the actual goal for optimization is making gradual improvements to its gait and not, as a layman might naively assume, getting to point B in the physical world.
Neurons in the brain don't work anything like neurons in neural networks, except for the general idea of building a more complicated function by combining simpler, many-input functions. In particular, there is basically no plausible way for real neurons to do back-propagation.
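For reference, here's what back-propagation actually demands even in a minimal two-layer case (toy sketch, random data): every intermediate activation has to be kept around, and exact error signals have to flow backwards through the same weights used on the forward pass - the step with no obvious biological counterpart.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))            # batch of inputs
y = rng.normal(size=(8, 1))            # targets
w1 = rng.normal(size=(4, 16)) * 0.1
w2 = rng.normal(size=(16, 1)) * 0.1

for step in range(200):
    # Forward pass: every intermediate value must be stored for later.
    h_pre = x @ w1
    h = np.tanh(h_pre)
    pred = h @ w2
    loss = np.mean((pred - y) ** 2)

    # Backward pass: exact gradients flow backwards through the *same* weights,
    # in reverse order.
    d_pred = 2 * (pred - y) / len(y)
    d_w2 = h.T @ d_pred
    d_h = d_pred @ w2.T
    d_h_pre = d_h * (1 - np.tanh(h_pre) ** 2)
    d_w1 = x.T @ d_h_pre

    # Weight update: plain gradient descent.
    w1 -= 0.1 * d_w1
    w2 -= 0.1 * d_w2
```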
I think that our lack of understanding of consciousness is kind of the core problem here. Assuming that it's possible (which should be understood as a massive and totally unproven assumption) to accidentally create consciousness in a sufficiently complex algorithm, how would we even know? What would that even mean? Not in the sense of woah-I'm-so-high-look-at-my-hands philosophical navel-gazing, but in direct this-is-what-words-mean terms? Without a usable definition of consciousness or a means to detect it, it seems like all the AI alignment hubbub is putting the cart several decades at least before the horse.
Eh, I think this sorta misses the mark. It’s the kind of thinking that leads to bizarro conclusions like “only humans are conscious, self-aware, sentient beings”. The problem isn’t that precise definitions and mechanisms are lacking (although they are), but rather that bad faith, or simplistic, actors take advantage of that fact to argue using magical thinking. Frankenstein's monster is the classic modern example of this kind of “fears made manifest through fiction”, but it’s not fundamentally any different than dramatized fears about change throughout history.
I think the "function call" argumentation is the wrong way to go here. There’s this whole idea of the [computational theory of mind](https://en.wikipedia.org/wiki/Computational_theory_of_mind?wprov=sfti1), which would consider the human brain nothing more than a rather elaborate function call. We can’t really answer the question of "what is thinking?" yet, and on the physical level, there’s nothing spectacular about a brain (ignoring any metaphysical beliefs here, of course). I mean, even in a computer, a function call can do *anything*. Still doesn’t make GPT anything more than some crude mathematical calculation that strings together words according to some statistical model.
>Where exactly does the "break out of the box and kill everyone" happen? It's literally just solving math equations! Wouldn't it need internet access to break out of the box and kill everyone? Right now it only responds to prompts. Once someone allows it to freely post and act online (and the tech improves to make it more intelligent) that seems like when it becomes dangerous.
i think you put the important and very science-fiction-y part of your comment in parentheses
These models are being cross-combined. Once training on sensorimotor feedback in our physical world passes an arbitrary threshold, I suspect we’ll see another sudden jump in AI capability with respect to spatial/numerical reasoning and symbol grounding in general, specifically with respect to robotics. All this multidimensional, multimodal training encodes semantic understanding in ways we cannot yet fully understand, due to the very nature of how deep learning encodes information. If you imagine the neurons in a brain as a graph in 3D space, then any subset of connections can, I think, be modelled with tensors. At our current level of understanding it’s just numbers - weights and biases - but it seems to me like some level of symbol grounding is already happening. It may be a gradual process instead of a sudden awakening like in pop culture. This is also happening as hardware optimises for the relevant maths (neuromorphic computing). It’s basically following an evolutionary approach with compute at massive scale, with what I assume is an underlying physicalist interpretation of how consciousness arises.

I personally believe AGI will happen. No bets on timing. The main dangers are the biases in existing data sets being perpetuated, and not really knowing the inner world (insofar as there is one) of the AI agent. Robert Miles has good videos on AI safety and the orthogonality thesis. Skynet AGI is in theory possible but low probability IMO. It’s all about the wider context: what it’s used for, whether it’s democratised, who has power over the tech. The rationalists have their heads in the clouds while genuine problems exist here, now, globally. My worry is that this tech stays in the hands of our corporate tech oligarch overlords.

From elsewhere in the essay:

> This strategy might work for ChatGPT3, GPT-4, and their next few products. It might even work for the drone-mounted murderbots, as long as they leave some money to pay off the victims’ families while they’re collecting enough adversarial examples to train the AI out of undesired behavior. But as soon as there’s an AI where even one failure would be disastrous - or an AI that isn’t cooperative enough to commit exactly as many crimes in front of the police station as it would in a dark alley - it falls apart.

There’s a lot to unpack here, but to my mind the part where he sets up a scenario in which, in reality, the tenuous point he is trying to prop up is totally undermined (sure scooter, we’ll go right from accidental deaths at the hands of semi-autonomous weapons to unleashing some kind of omnipotent AI superweapon into the nuclear arsenal and in the intervening decades/centuries we’ll just sit on our hands) is fairly textbook Scott.

Given that his friends, the EAs, are, presumably, about to be given front row seats to a fairly substantial regulatory shit storm a la bitcoin courtesy of SBF, you’d think he’d be able to extrapolate.

It feels to me as though in the past few years, Scott has retreated into a bubble of like-minded friends and colleagues, with deleterious effects on his thinking. The argument in this piece is totally unconvincing to anyone outside the libertarian doomsday cult.
I haven't been following him for that long, but I have looked through the archives to see how he used to act when he was at his peak. He has definitely become lazy: he doesn't bother to coin silly terminology at every turn, and he mostly doesn't even bother to try with his faux-intellectual nonsense, stringing together a number of seemingly (and, by and large, actually) unrelated esoteric ideas into one incoherent whole. I didn't see a single Yiddish word in the whole essay and only one reference to Revelations; he didn't even reference himself! He's becoming more and more like a normal blogger - if he weren't so self-assuredly wrong he would be tolerable.
> It might even work for the drone-mounted murderbots, as long as they leave some money to pay off the victims’ families while they’re collecting enough adversarial examples to train the AI out of undesired behavior. But as soon as there’s an AI where even one failure would be disastrous I like the implication that having your loved one murdered by a drone-mounted murderbot is not disastrous, as long as you get paid enough money for it. This might be peak rationality.
I find the idea of AI weapons almost laughably dumb. War is the classic example of an adversarial situation where successful strategies stop working quickly because *the other side reacts*. It’s the 21st century equivalent of using gas once, and then discovering to your dismay that the other side suddenly issued gas masks. Your opponent gets an opportunity to *react* too. Until you create an actually creative AI, that shit is not going to work well.

“I’m not a cultist, but I am right” is some weak shit.

Know who has the best take on what “AI” will do? The fucking Marxists do. Not that Scott would ever accept that.

Note: this is for the “AI” that actually does exist, which will be used to automate some sorts of labor, not the fantastical AGI AI the cultists are worried about.

"I'm not a doomsday cultist, but when I picture an illustrative analogy for my situation it's one which involves a doomsday cultist witnessing the oncoming doomsday as predicted by said cult"

“I mostly reject the accusation”

Well he did say he's part of a polycule, so maybe there is something to the cult aspect.
Come on, of all the things they say and do, the poly thing isn't the bad stuff. Consenting adults and all that.
it was hilarious when the FTX Polycule hit the news and the crypto bros were ***OUTRAGED*** like more so than about all their fucking money being stolen. Ordinary plain people getting laid when they weren't.
Yeah, I was rolling my eyes a lot at that kind of stuff. 50b+ dollars and you care about some sex? Wtf.
50 billion dollars is just money, but sex is fruit and cake!
bitcels
No indeed, I don't mean to imply that being sexually explorative is inherently wrong by any standard that involves consent between adults. Moreso that there seems to be an overlap between cults and non-traditional group sexual relationships.
Yeah, I wonder if the latter is because they're more open to new experiences, so they would also be more likely to try non-traditional relationships. But it might also just be tacked on, as in the cult leader going "I've got them eating out of my hand, and I'm feeling a bit horny, why not?"
I guess it depends on the style of cult. If you include religion, there are plenty that explicitly don't condone non-traditional sexual relationships. I suppose if your cult is based on rationality, and one aspect of that includes the triumph of rational decision making over emotion, you might come to the conclusion that the only reason non-religious people typically still engage in single-partner relationships is pesky emotional attachment, which you have forgone.

If it works it works, but to my mind there is a number of sexual partners in one relationship structure (i.e. not just a person who has a wide variety of sexual partners who may or may not be aware of each other, but an actual structured relationship that each of the members would admit to existing, beyond just casual sex) above which I would start having questions about the other aspects of one's lifestyle that led to this situation arising in the first place, and I wouldn't be surprised if those circumstances included being a member of a group that fits the description of a cult. Maybe I'm just a puritan.
I’ve got such mixed feelings about polycules. Like yeah, in theory consenting adults shouldn’t be looked down on for whatever sexual relationships they wanna have. But it also seems like the sort of people who typically get into poly are often selfish and immature, or the sort of people who just dive into things without thinking carefully about whether it’s a good idea. Like the kind of people incapable of learning from the mistakes of others because they assume that the others were just doing it wrong but when *they* do it they’ll be fine because obviously they’re way smarter. So when I hear about a catastrophic collapse that lost a lot of people’s money due to having almost no rules or oversight, and then find out that they were also poly, it’s like “of course they were.” I guess you’re way more likely to hear about the bad situations. But I just don’t have a ton of good poly situations as examples to balance it out.
I've heard of enough poly situations going well. And yeah, compare them to normal relationships where the same stuff happens. I knew a lot of people who took getting into poly pretty seriously (in my experience they took it more seriously than others in the same social group who were mono). So I guess it depends on the situation.
>But it also seems like the sort of people who typically get into poly are often selfish and immature(...) Coincidentally, this is how i feel about many people i know who aren't poly, often ending up in disastrous outcomes. Maybe some selection bias going on here, yeah?

He doesn’t refute it, he denies it

“Captain, come look at the scanner. It’s as if this life-form has suddenly become self-aware.”

The old Scott and Bailey

So…he in no sense rejects the accusation?

Accidental self-Bulverism!

The real danger isn’t AGI. It’s AGI being used to further consolidate power into the hands of those who already have more than enough. Musk benefits from being in the right place at the right time. He’s spiralling to the right. He also happens to be an investor in OpenAI.

Longtermism is a fucking joke when there are more important problems across the planet, now.