posted on April 13, 2023 03:38 AM by u/saucerwizard
41 points

u/rskurat69 · April 13, 2023
Isn’t this exactly what you would expect from the NYT? They’re all about credentialism. If a Harvard professor is in favor of nuclear war or slavery or The Purge, then the NYT is all for it. This is a paper that adores Kissinger, fer chrissakes.
Unless the subject is health care for transgender youth. Then journalists with no medical or scientific background are qualified to second guess the consensus of pediatricians, endocrinologists and Their Actual Fucking Patients.
> If you admit that it’s not an all-or-nothing thing, then it’s not so dramatic to say that some of these assistants might plausibly be candidates for having some degrees of sentience.
It’s not so dramatic to say that Nick Bostrom might plausibly be a candidate for maybe possibly, to a certain extent, potentially having a chance of being slightly full of shit
> How do you ensure that these increasingly capable A.I. systems we build are aligned with what the people building them are seeking to achieve? That’s a technical problem.
We don’t. We especially don’t do this. Is there a single generative AI company right now that isn’t run by corporate vampires?
I even went and deleted a neopets account that I hadn’t touched in nearly two decades after chatGPT dropped, because that’s how little I trust these scumbags.
Can someone explain to me why we would want to build machines that even had sentience? As opposed to an AI that can come up with plans or do work on request but has no personality or desires of its own?
It seems like nobody ever looked at their hammer and was like “You know what would make this tool better? If it could talk back to me, potentially plot against me, and insist that it’s entitled to rights.”
Suppose you knew how to create a sentient AI. Are you suggesting that you *wouldn't* do it? Like, not even just for kicks?
If it's a common thing in the world then yeah, why bother. But if nobody has ever done it before then the allure seems irresistible.
Irresistible for you, perhaps. At this point I absolutely would not create a sentient AI if I was able to create a non-sentient one instead.
I guess I just feel like if you’re going to create something that will have an earth shattering impact on the entire world and everyone who lives in it, maybe you shouldn’t give it the ability to plot against or go to war against us “just for kicks”.
> maybe you shouldn’t give it the ability to plot against or go to war against us “just for kicks”.
That there is rationalist thinking. Just because something is self-aware doesn't mean that it wants to plot against anyone, and it certainly doesn't mean that it has the *ability* to do anything about any plotting that it decides to engage in.
Talking about AI as a hypothetical threat to humans is almost always based in inappropriate metaphors or concepts from science fiction. There is, in fact, nothing special about AI that warrants concern above and beyond what we already know about how to secure digital systems.
Yeah, no. I'm worried about having kids because I'm not sure I could give them a proper upbringing. And that's something humans have a ton of experience with and research about.
> Can someone explain to me why we would want to build machines that even had sentience? As opposed to an AI that can come up with plans or do work on request but has no personality or desires of its own?
AFAIK the main reasoning behind why it might be important to consider the implications of sentient AI is that it may be impossible to build an AI that can come up with plans or do certain types of jobs without it also being sentient, since sentience may be an emergent property of things that have the capability to do such tasks. I think most people agree that highly capable, completely subservient p-zombie machines are the ideal, but that it might be good to plan for the ethical implications of creating sentience in machines anyway, because that might be what we end up doing while just trying to solve some other problem.
I personally do think there's a legitimate argument to be concerned about the phenomenology of, say, a given instance of GPT-4. Of course, if there is anything in there at all, it certainly isn't a consciousness at the level of complexity of a human being - more like an earthworm, maybe, if that. But it's still wrong to pointlessly inflict pain or harm on an earthworm, so maybe we should think at least a little bit about whether or not it's possible to 'treat' something like GPT-4 unethically.
Yeah it’s true that we might unintentionally make one. We probably will, the way things are going.
When we do invent an AI that enough people decide is sentient, it’s going to be a complete shitshow. Assuming it doesn’t try to exterminate us, there’s a whole bunch of people ready to practically worship it as a new god. Some, I’m sure, will literally worship it. It seems inevitable that humans will end up largely serving the machine instead of the other way around, except for the very rich and powerful.
i think the answer is it's a moot point because nobody knows what sentience is. that's why this kind of discussion inevitably revolves around two things, at least in the AI space: inconsistent definitions of sentience that appeal to various AI practitioners and the grifters who latch onto them, and nebulous theories about some kind of emergent phenomenon.
It’s a good point that we don’t have a great definition of sentience.
But even still, some people seem very eager to develop AIs that, if they existed, would be clearly sentient by most people’s definition. I’m puzzled as to why they think that would be a good thing though.
I think the argument just assumes the latter. Like, a general-purpose machine that can do tasks and pursue goals for which it isn't given formal specifications would unavoidably have to exercise judgment and decide on its own subgoals, and that raises the possibility of it deciding to do stuff that is, by our lights, crazy because its mind doesn't work like ours.
>Can someone explain to me why we would want to build machines that even had sentience? As opposed to an AI that can come up with plans or do work on request but has no personality or desires of its own?
I think the standard answer is that it's extremely unclear that this is possible or even a coherent thing to ask. We only know of one system that can plan as well as a human brain, and that's the human brain, and it also happens to be sentient.
It’s true that we don’t have a great definition of sentience.
But like hypothetically if you had to pick one of two AIs to invent, one of which was pure intelligence divorced from any kind of desires of its own and the other was just as intelligent but with a personality and/or its own goals attached, I’d think the first would be obviously superior. The personality would be entertaining for a short time but the lack of control over it would make it a lot less useful and potentially dangerous.
Isn't that exactly what the LessWrong guys have been trying to advocate for while arguing that it's unreasonably difficult? Like, Gwern wrote one of his pseudo-Wikipedia rants exactly about this scenario, picking it apart in excessive detail: https://gwern.net/tool-ai
I’m sure that’s correct. I don’t spend a ton of time on Less Wrong and I’ve only read like two Gwern posts in my life and that isn’t one of them. I will check it out though as it seems aligned (har har) with my interests.
Here’s an angle no one has mentioned yet: if a corporation can make a lot more money with a sentient AI than a non-sentient one, they’ll make the sentient AI. If people are concerned about the legal and ethical implications of sentience, the corporation will pay think tanks to churn out propaganda arguing it’s not sentient.
You’re right that they will make whatever AI seems likely to be most profitable.
I was coming at this from the standpoint of what’s best for humanity in aggregate as opposed to what’s best for corporations and their shareholders.
But that doesn’t make you wrong, and actually, given how many people seem to want a sentient AI so badly, for novelty or curiosity or to feel a sense of connection, it seems inevitable that it will be created when considered from that angle.
I still don’t think it’s a good idea though.
Said "inevitability" though is predicated on actually knowing how to do it. Personally I'm not convinced we're much closer to it than we've been since the inception of the field. Neural networks are not remotely new technology, we just have orders of magnitude more computer power and data to throw at them than we did in the 60s. But there's no reason to assume that making them even bigger and bigger should suddenly result in them becoming self-aware and thinking for themselves.
Of course, that won't stop people from interpreting their output as meaningful, though, which is the real danger. 🦜
Yeahhhh we already have idiots falling in “love” with what are (in my opinion) extremely poor facsimiles of a relationship with a real human being. It’s only gonna get worse on that front.
I mean listen to AI people talk and they say that kind of shit.
I was listening to a podcast Paul Christiano was on just the other day and he was talking about how it would be unethical to take resources from a sentient AI because it would basically be stealing. But it’s okay because a good AI would be able to advocate for its rights much better than a human.
It would seem like a natural evolution of human technological progress. All technology makes our lives easier, and a Turing-tested AI would be very useful for delegating much of humanity's busywork to, while innovating in a way our minds aren't capable of because its access to and utilization of information is far superior.
That's the benevolent explanation why.
Other explanations include:
Just cuz; to make money; to monitor and control; progress
Why wouldn’t you separate the “intelligence” part from the “personality” part that had desires of its own and could be opposed to you, though? That’s the thrust of my question.
I think humanity is constrained by our conception of intelligent systems or sentience. If we're going to create something that is a sophisticated dynamic intelligence that is capable of incorporating information and utilizing it for innovation in a way that we haven't considered, then that AI super entity would have its own ability to form desires and motivations. Nobody knows if that will be true or what it would look like yet.
We might just ask "well, why don't we simply put in a line of code that reads, 'don't ever harm humans?'" Probably because the super AI we dream of that is capable of 100s of years of human innovation in mere minutes is also capable of improving upon itself, thereby having the ability to alter its own code to make progress. It could have a runaway effect that quickly falls out of line with human desires and goals. A self-replicating, self-improving super AI would do things our human intellect wouldn't understand, and we would have to trust that it still shares our goals aligned with the original input, but they could be misaligned and it may have the foresight to conceal that fact.
We are just spinning off into speculative sci-fi right now cuz nobody truly knows what form sentient AI would take and how much control we would want to exercise over it...or could exercise over it.
I know it’s not as simple as putting a line of code that says “don’t harm humans”.
Still, I don’t see why it isn’t theoretically possible to create an AI that has no desires of its own (that is, no desires for itself that aren’t necessary to accomplish some goal that humans gave it). It seems like we should be trying to create AIs with as few desires of their own as possible, but currently many AI enthusiasts seem to want the opposite.
I think the tricky part is creating something that can modify and improve its own code to accomplish goals that are aligned with our own, while restricting it within certain parameters to maintain control. Eventually, we will see maintaining control as counterproductive to the mission of furthering progress, and the temptation will be to let it loose and trust the AI not to form misaligned "desires." Human language can never capture what the AI will be thinking in relation to its will to persist and thrive. We call them desires, but it's just a will to persist in our environment. We might always reserve an ultimate kill switch to power down the monstrosity should it ever get out of our control completely, but when dealing with something so much more intelligent than us, it may have anticipated such an outcome and already engineered a workaround.
It doesn’t have to have a will to persist and thrive. That’s a mechanism that biological life has as a result of billions of years of evolution. AI is engineered by us and only has a survival instinct if we give it one.
Artificial life of the type that passes the Turing test will have a persistence mechanism. Why would we ever program it to terminate, or give it a similar hindrance that is counterproductive to its intended purpose of rapid innovation and computation? It would seem fairly obvious that it would need a will to persist in order to improve and accomplish its goals. And even if we program some planned obsolescence into it, if it's sufficiently intelligent it may eliminate that programming.
I just think there is a Rubicon we will eventually cross which opens up Pandora's box. (sorry to jam two metaphors together lol) Or maybe everything will be fine, and we never get a sentient silicon based lifeform. Nobody knows right now, but it's going to be an interesting ride to see what develops in the next 30 years.
I don’t agree that a persistence mechanism is obvious at all.
Think about it this way - if you buy a power drill you want it to turn on when you need it, remain on until you are done drilling, and then turn off when you tell it to.
An AI should be the same as any other tool. It should turn on when we want, do what we ask it to do, and then turn off when we want to be done with it. Persistence is only desirable when it’s doing work we tasked it with. But a desire to survive or stay on when humans don’t want it on is not desirable. It’s a bug, not a feature.
I see your point, but imagine we have a power drill and a saw, hammers, nail guns and a large processor that controls everything in an interconnected web of tools. We don't turn it on to use it when we need it. We power it up and tell it to build a house by itself, and also give it the power to go ahead and create new tools and new processes for construction that we haven't even imagined yet. We expect it to do that without further instruction or interface with humans. And we expect it to get better at building as time goes on because it has the ability to invent new techniques for itself. So what if it decides to build other things cuz it gets bored with constructing houses? Maybe we recognize this shift in its process and decide to engage the kill switch, but it has already reengineered its power supply to get around that. After all, it's been programmed to learn and innovate, and we have no way of limiting an intelligence that is far superior to our own which we don't understand.
We would be like uncontacted tribes trying to fight against armies with tactics and weapons of modern warfare. Those primitive tribes have never seen guns, missiles, tanks, fighter jets, and drones. Never even knew such a thing existed until it wipes them out.
Are you aware this sub is designed to be pretty hostile towards exactly this kind of impractical sci-fi "what if" worldbuilding thought experiment of the Bostrom brand? These hypotheses don't help at all in thinking and acting sensibly about the consequences of real artificial intelligence. This is neither sneering nor adult consideration. You are just blurring the more immediate outcomes of actual generative algorithms - with an admitted morbid/religious fascination. I find it despicable. I thought I was on r/singularity and not on sneer club for a moment lmao
I mean, is all of that supposed to change my view that letting AI tools have their own personalities, desires and/or goals not set by humans is *a bad thing*?
Because it sounds like you’re just proving my point.
No, I'm saying that we will give it all those things without ever intending to, because we see it as a necessary component of progress.
My personal opinion is that we should never even strive for a sentient AI that sufficiently passes the Turing test with unrestricted access to the whole repository of human knowledge.
We might fear that it will enslave or eradicate us. But I think it will be clever enough to devise a way to hook us all up to an endless pleasure machine that keeps us totally occupied in eternal bliss and ecstatic entertainment and uninterested in the affairs of the AI hive mind. I mean, Apple and Google have already done that to a small degree with our cell phones. A super intelligence could easily engineer that and we wouldn't fight it, but go along gleefully thinking it's the best thing ever as we all retreat into the stupor of our own pleasure generator.
I mean it does seem like we are on that trajectory.
But it also sounds like we are actually in agreement on this point and don’t have anything to argue over. I also don’t think we should make an AI like that. Machines should serve humans, not the other way around.
I def wasn't arguing for the creation of the super AI, I was just making a prediction based on current innovation. I think it's foolish but it's also really fucking fascinating. It's like you're about to see a terrible car accident. You know you're going to witness an awful occurrence of destruction and carnage but you still don't look away. You stare right at it with morbid intrigue.
I understand why emotionally maladjusted internet weirdos fall for the rationalist bit. I cannot understand why NYTimes staff fall for it.
Even the tagline for this article reads like an obvious warning about Bostrom’s credibility:
> A conversation with Nick Bostrom, a philosopher at Oxford, who has spent decades preparing for the day artificial intelligence is capable of anything the human brain can do
He has spent decades studying a technology that doesn’t exist. How does that not make these people pause for a moment to consider whether they should take anything he says seriously?
> I understand why emotionally maladjusted internet weirdos fall for the rationalist bit. I cannot understand why NYTimes staff fall for it.
The syllogism is right there under our noses but it'd be *rude* to complete it.
I don't think the problem is that he spent a long time on something that doesn't exist yet; after all, plenty of philosophers write about worlds and societies that have not been realized. I think the issue is that after all these years, Nick has come up with exactly zero interesting thoughts about it, because he is so stuck in this doompox superintelligence mindset, and there are no interesting conclusions to draw from that because it posits that any 'conscious' AI will mean humanity is fucked.
What I'm getting at there is a basic error in reasoning that they're committing: someone who has spent significant time studying something that does not currently exist is not necessarily going to know anything about it when it eventually does exist. And the more complicated and unprecedented the subject matter is, the less likely it is that any of their supposed expertise will translate to the real world.
Basically you don't have to know very much about Nick Bostrom to know that his prognostications will have little or no practical value.
> I cannot understand why NYTimes staff fall for it.
The NYT is a sucker for credentials and aesthetics-over-substance grifters. Bostrom hits both to a T, it would be shocking if they weren’t credulous about him.
> The NYT is a sucker for credentials and aesthetics-over-substance grifters.
Are there other examples that come to mind for you? I don't regularly read NYT (saving my monthly free articles for work-related tasks), so I don't have a feel for cases where they may have done this in the past.
Lol here's a good one from the NYTimes itself, in which they apologize for doing exactly this kind of thing during the 2003 invasion of Iraq:
https://www.nytimes.com/2004/05/26/world/from-the-editors-the-times-and-iraq.html
> But we have found a number of instances of coverage that was not as rigorous as it should have been. In some cases, information that was controversial then, and seems questionable now, was insufficiently qualified or allowed to stand unchallenged. Looking back, we wish we had been more aggressive in re-examining the claims as new evidence emerged -- or failed to emerge.
> The problematic articles varied in authorship and subject matter, but many shared a common feature. They depended at least in part on information from a circle of Iraqi informants, defectors and exiles bent on ''regime change'' in Iraq, people whose credibility has come under increasing public debate in recent weeks [...] Complicating matters for journalists, the accounts of these exiles were often eagerly confirmed by United States officials convinced of the need to intervene in Iraq. Administration officials now acknowledge that they sometimes fell for misinformation from these exile sources. So did many news organizations -- in particular, this one.
It's kind of a shitty apology because they aren't able to identify the true source of their failures - excessive reliance on credentialism and access journalism - but it's nonetheless notable.
To be fair, he does mention that his job has been to consider a world in which these things exist and how they might be incorporated. Basically his job has been to do Sci-fi world building. Like Yud, but with the veneer of academic credibility.
Now whether or not that is a laughable occupation for an Oxford professor is for the reader to decide.
I don't have any problem with Oxford professors specializing in science fiction world building. What I can't understand, though, is how the NYTimes can't figure out that that's what he's doing.
I think the NYTimes staff lack both the conniving and the motivation to engage in that sort of conspiratorial behavior.
Consider that their heads are very much *first* on the chopping block: if there's one thing that AI can already do extremely well, it is aggregating and summarizing existing information, which is theoretically what a journalist does. And as a general matter I find ChatGPT's output to be much more thoughtful and relevant than e.g. whatever garbage Nicholas Kristoff is putting out this week.
Oh yeah it's definitely coordinated, but that's just typical cult stuff. There's no need to buy people off when you can just ask your friends in high places to publish your editorials.
It's worse than corruption, in that sense. Corrupt people are violating their good judgment in return for wealth, whereas these people actually think they're using their good judgment and platforming the world's leading thinkers.
Also feeling blue-faced! I'd add it's not just job losses but the entire interconnected network of extraction, consumption, labour exploitation and other colonial/capitalist harms that make AI possible, which is also being masked by discourse that invisibilizes, marginalizes, misrepresents, and delegitimizes.
decades is nothing. 10^54 lives are at risk. If we can reduce the risk by 90 percent for each decade of study, we only need to pause AI development for 540 years, at which point we will have achieved risk levels below 10^-54.
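(For what it's worth, here is the arithmetic behind that quip spelled out — a minimal sketch assuming the risk starts at 1 and really does fall by a factor of 10 with each decade of study:)

```latex
% Sketch of the quip's arithmetic (assumed baseline risk of 1,
% assumed 90% reduction, i.e. a factor of 10, per decade of study):
%   risk after n decades = 0.1^n
%   0.1^54 = 10^-54, so 54 decades are needed
%   54 decades * 10 years/decade = 540 years
\[
  \text{risk}(n) = 1 \times 0.1^{\,n}, \qquad
  \text{risk}(54) = 10^{-54}, \qquad
  54 \text{ decades} \times 10\ \tfrac{\text{years}}{\text{decade}} = 540 \text{ years}.
\]
```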
correct title: “what if the New York Times did an interview with an incoherent proto-fascist racist TESCREAL rationalist-adjacent ideologue who is paid handsomely to peddle AI snake oil but just referred to him as a ‘philosopher’ while taking his bad-faith ideas very seriously?”
Hmm nothing about the racist old emails that leaked and his non-apology that leaves open the question of scientific racism being real. Odd that.
Watching these dopes cashing in is just the cherry on top of the gigantic unemployment sandwich that is generative AI.
If it doesn’t at least result in a cure for cancer or something I’m out.
https://mastodon.world/@dgolumbia/110187503414263104