r/SneerClub archives
Area AI expert worried that chatbot needs help because it was trained on dataset with lots of help requests in it, still hates chickens (https://i.redd.it/glw0qlmhwow71.jpg)
323

Respondent on Twitter: “I have raised chickens from tiny chicks to egg-laying adults and later buried them, and I have written NLP systems from low level components.

Your tweet above is the most ridiculous thing I have read today.”

Yud’s reply: “I’ve also written query-key-value matrices and variants on the Adam optimizer in Python, and first wrote commentary on Chalmers’s hard problem in 1995, dear credentialist.”

>wrote commentary on Chalmers's hard problem

I like how the best 'credential' he can think of here is that he said some stupid shit about a smart thing that someone else came up with.
yud has never met a chicken, but doesn't consider that fact relevant to the conversation
* His wiki page notes that he is an autodidact, meaning he did not complete higher education and has not worked in an academic environment. Education credits are not everything, but part of the reason they exist is to prove you actually understand the fundamentals of your field.
* He is known for heading the Machine Intelligence Research Institute (MIRI), a private non-profit which exists to address imagined existential threats from AI. It was funded by Brian and Sabine Atkins who, from a quick search, have apparently done nothing but fund MIRI.
* The rest of his activity largely looks to be blogging, playing up imaginary doomsday events regarding AI, and publishing Harry Potter fanfiction based on his blogging material about the scientific method.

Credentialism is when academics with relevant experience claim to have greater authority than unqualified science fiction bloggers making shit up on Twitter.
It'd be fine if he had no relevant degree but actually worked in the field, which he didn't either. TBH it is completely obvious that he read some layman's explanation of neural networks once, and that was about it. He also didn't read any explanation of chickens and isn't smart enough to see that what chickens do is, at a minimum, pretty damn complicated.
I'm pretty sure he did nothing in Python in 1995. edit: That being said it is funny how an "AI safety expert" can't even claim some technical expertise that's relevant to his blabbering about AI.
[removed]
["I can read HTML but not CSS"](https://www.reddit.com/r/SneerClub/comments/p0izej/yud_computers_are_the_greatest_possible_risk_or/) ---Eliezer Yudkowsky, 2021.
In 1995, I wrote a simulation of a predator-prey ecosystem in BASIC. I reinvented the distance formula from the Pythagorean theorem in order to tell from their coordinates whether one organism was within another's detection radius.
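(For the curious, the check being described is just the Euclidean distance test. Here is a minimal Python sketch of the idea, not the original BASIC; the organism positions and radius below are made up for illustration.)

```python
import math

# Hypothetical re-creation of the detection-radius check described above
# (in Python rather than the original BASIC; names and numbers are illustrative).
def within_detection_radius(ax, ay, bx, by, radius):
    # Pythagorean distance between organism A and organism B.
    distance = math.hypot(bx - ax, by - ay)
    return distance <= radius

# e.g. a predator at (0, 0) with detection radius 5 spots prey at (3, 4):
print(within_detection_radius(0, 0, 3, 4, 5))  # True: the distance is exactly 5
```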
But how many Harry Potter fanfictions do you have under your belt, huh?
Sadly, none. Once I thought up the title *Harry Potter and the Masque of the Red Death,* I knew that I could never write a story that lived up to it. So, it is only one of the fictional books within my [*Daria*-*Sandman* crossover novel](https://archiveofourown.org/works/8165015?view_full_work=true).
That's a good title.
That responder isn't even citing credentials. They're describing things they've done.
That's what credentials are bro
[deleted]
How dare those who know things by doing and have practical knowledge lord it over those who have THOUGHT endlessly about what they might do if they ever were in a position to DO SOMETHING... Jerks.
Yeah I can’t get over how “I have experience relevant to assessing this claim” is twisted into “credentialism!!!!”
How dare you question me, The Dear Leader.
Well, getting the word 'credentialism' wrong wasn't on my bingo card, this is an odd new development.
Wow, Yudkowsky wrote query-key-value matrices in Python! That's some real cutting-edge matrix multiplication. I don't understand how Yud survives in an environment where people know what this terminology means. Everything he writes on statistics, ML, AI, etc. is consistently at the level of a sophomore undergrad student.
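(For readers wondering what "writing query-key-value matrices" amounts to, below is a rough sketch of single-head scaled dot-product attention in Python/NumPy. The shapes, names, and random weights are illustrative assumptions, not anything Yudkowsky actually wrote; the point is that it really is a handful of matrix multiplications plus a softmax.)

```python
import numpy as np

def scaled_dot_product_attention(X, W_q, W_k, W_v):
    # Project the inputs into queries, keys, and values.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    # Similarity of every query with every key, scaled by sqrt(d_k).
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V

# Hypothetical toy example: 4 tokens, model dim 8, head dim 4.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
W_q, W_k, W_v = (rng.normal(size=(8, 4)) for _ in range(3))
print(scaled_dot_product_attention(X, W_q, W_k, W_v).shape)  # (4, 4)
```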

Denying that animals suffer is so inconceivable to me. It's just baseline. But luckily EY is here to give us his great heterodox thinking to deny that (at least some) animals suffer. Truly a big brain there! I am so impressed!!!!!!

You know, the real problem with these fucking idiots is not that they don’t have credentials (because really credentials are eh), it’s that they are ignorant and really proud of their ignorance. They just begin with their metaphysical theses about the world and reason from them. And what a shock that everything in their world coheres with them. Maybe that’s why changing your mind was so far down in the Sequences: because these fuckers are so bad at it.

Most people, dare I say almost everyone, are terrible at changing their minds. It's something we often demand from others but rarely do ourselves. That said, denying the suffering of animals is ludicrous at this stage. Sentience is clearly a continuous variable, not binary. To paraphrase Hofstadter: even a mosquito has a soul, it's just a very tiny soul. I think we deny it mostly because we're social omnivores. Our morality has outpaced our biology.
Getting better at changing themselves is the alleged *raison d'être* of capital-R Rationality - "overcoming biases" and all that. The actual output, however, is an increasingly convoluted framework to make a case for their own superiority.
>Most people, dare I say almost everyone, are terrible at changing their minds. It's something we often demand from others but rarely do ourselves.

I've never been convinced of this. It's pretty easy for me to see that there are some things I haven't changed my mind about in the face of intense disagreement. But I've also changed my mind about many things. It's possible I'm special, but that seems silly. It's possible I'm obtuse about how little I change my mind. That could be true, but also seems more motivated by the theory than by my evidence. What is possible is that the areas in which most people "go to the mat", so to speak, are the areas in which they have strongly held beliefs. And that might give the illusion that people are not likely to change their mind in general.
Yeah, maybe, that's possible. I've just found that the more a strongly held belief is challenged, the stronger someone holds onto it. I can't say it's true for everyone but it's certainly true for me. I fight against that impulse hard. There's been some studies on it, including analyzing brain scans. Interesting stuff.
> I've just found that the more a strongly held belief is challenged, the stronger someone holds onto it.

Well yeah, but that isn't because people never change their mind, but because directly challenging someone on a strongly held belief is just about one of the worst ways to change their mind.
Yeah, excellent point!!

Young Eliezer is handed a Teddy Ruxpin doll: “Ah, it asked me in words to help it! How could anyone think that Strong AI isn’t real!”

Teddy Ruxpin doll: "get back in the locker nerd"

[deleted]

I don't even understand how his arguments square with his expressed worldview. His blog post's real "argument" is just a utilitarian one: he's saving future galaxies, and avoiding meat would harm his health and productivity. So he gets a pass (naturally). Telling people not to worry about animal suffering and strawmanning those who do is just window dressing. But surely the credence that non-human animals, which are really quite close to humans physiologically, experience suffering can't be that low. Maybe your model is wrong? Maybe animals do have inner observers, just really stupid ones? Yudkowsky gets a pass, since he's saving galaxies, but doesn't everyone else have to shut up and multiply the dust specks?

so what is he even arguing for? that other people are just dumber than he is? how can someone write so much without positing a single opinion or observation… is that what he's afraid of?

He wants a pass to participate in systems that abuse animals, so he engages in motivated reasoning. He wrote a blog post about why this is bad, except evidently he thinks it's okay when he does it, even in cases where it undermines his understanding of the problems he has set as his life's work to solve.
A vegan got under his skin so he's determined to use the only tool he knows, vagueposting about technical gibberish, to make himself feel better about killing for mere pleasure
~~That the 'cares about animals' part of EA is wrong.~~ he explicitly says this isn't the case later. My bad.
He thinks gpt3 is sentient
No, he says it's not sentient; he just thinks that chickens are equally not sentient, and this is somehow a dunk on people who think chickens are sentient but gpt3 isn't?
Mega super brains like him understand that neither gpt3 nor chickens are sentient, which is at least 2 levels above dum dums who think that gpt3 is not sentient and chickens are. If you were almost as rational as him but didn't have as big of a brain you'd think gpt3 is sentient but not chickens.
Sorry, what I should say is: he thinks it is more likely that gpt3 is sentient than that chickens are, and that everyone else should be humble about these things but that he shouldn’t be
Not what he said or suggested. He suggested it's silly to argue chickens are sentient but the other isn't.
That directly implies a belief that GPT3 is at least as sentient as chickens. Check by negating it: if you believe that GPT3 is less sentient than chickens, why would it be silly to argue that chickens are sentient but GPT3 isn't? There's room in the gap between them for that to be consistent.
>at least as

If he thinks neither is sentient, he can't think one could be *more* sentient than the other. He thinks that they are *equally* nonsentient.
I don't have enough specific knowledge of the two entities, or of the philosophy of sentience, to make a distinction myself, but I'm just saying that HE'S saying the behavior of the bot should be enough to imply sentience if you believe the behavior of chickens is. What he didn't say (at least in these tweets) is that chickens are less sentient. Like, nothing he says here actually makes any comment at all about that.
Except GPT-3 is actually just a sentence generator, and not a particularly good one if you let it run for any length of time (the edited Guardian article it produced had signs that it was written by someone who didn't actually understand what they were saying, if you are looking for them). Chickens, on the other hand, show some signs of actually thinking about what they are doing, mostly through showing problem-solving behavior.
Whether he is right is irrelevant to *what he said*. Here, he said they are *equally* sentient, in that neither are sentient. If neither are sentient, one is not more sentient than the other.
Chickens are sentient, though. Also, he says later in the Twitter thread that he is "more worried" that GPT-3 is sentient than chickens are.
It's irrelevant whether chickens are sentient. This conversation was about what he *said*, not whether he was correct. He said he was "more worried" about GPT-3 elsewhere, but that's not present here. Moreover, being "more worried" that X is sentient than that Y is sentient does not imply believing that X is *more likely* to be sentient than Y. I could be more *worried* that X could be sentient if I believed that X's sentience could be more dangerous, more disruptive, or more puzzling than Y's sentience, without believing it was more *likely* that X was sentient than Y.
He didn’t say it in the two tweets I screencapped but there are other tweets in the thread and he said it in those
He said GPT-3 is more sentient than chickens, and that neither are sentient? That doesn't make any sense.
Yes, that's the point. Yud here is saying non-rigorous things from a position of ignorance and incuriosity under the banner of Rationalism. This kind of silliness (and the large amount of right wing extremism this is a thin veil for) is the raison d'être of SneerClub.
It is crazy to me how hard a time people are having parsing this basic logic.
for someone who doesn't care, you sure are working hard here
i guess he is forced to support anything which remotely lends credence to his scam.
Of course, he is one of the main thought leaders; imagine what would happen if he went 'you know I have been doing some work, and I have come to the conclusion that AGI is physically impossible'. "I felt a great disturbance in our Lightcone, as if millions of tortured copied nerd minds suddenly no longer cried out in terror and were suddenly silenced by non existence. I fear something terrible has happened."

My actual position is here: https://rationalconspiracy.com/2015/12/16/a-debate-on-animal-consciousness/ And it leads me to be slightly more worried about GPT-3 than chickens, though it’s still a pretty slight worry (or I’d be speaking up louder).

Chickens: *mentally model themselves and their place in the environment as a necessary consequent of problem-solving in the world*

GPT: *pattern-matches blocks of text as a linear input -> output problem without any embodied cognition or long-term self-model*

EY: Yes, GPT is obviously the better candidate for sentience here. I am very smart and good at theory of mind.

What the fuck?

This is exactly why we shouldn’t worry about AI, but we should worry about the people who work in AI.

To be fair to AI researchers - Yudkowsky doesn't actually work in AI.

yeah a computer program that generates words is definitely more sentient than an animal with a brain and a complex behavioral repertoire. sure yud

also i hate the uniquely rationalist arguing style where they combine smugness with babytalk. like totally, nope nope.
I randomly remembered this scene from the good place: https://www.youtube.com/watch?v=etJ6RmMPGko seemed relevant.

fuckers!!!! let me have my chicky nuggies without guilt, it’s the only thing i have LEFT!

seitan nuggies
that reminds me, next time i order something from the internet, i should also get some wheat gluten. i've wanted to make seitan for a while now but this shit's impossible to find in local stores
you can also just wash the starch out of regular flour dough yourself, though that's a bit of a pain and limits what you can do with the result. Also, if you have any stores that sell bulk baking supplies of any kind, you can probably ask them to order you a bag (and you probably won't have to pay shipping). I get 25lb bags this way at Winco, which lasts about six months of eating seitan constantly.
no way i'm doing that. i looked and it's less than $5 for 700g of the stuff right now
also, here's a good staple seitan recipe: [https://itdoesnttastelikechicken.com/vegan-seitan-tenders/](https://itdoesnttastelikechicken.com/vegan-seitan-tenders/) hopefully soft tofu is something that's available to you ^^;;
since I edited in advice for how to get it without noticing that you replied this looks a little awkward now soz
all good. thanks for the bulk buy advice, i might look into that
Well, if you disregard animal suffering, nuggies are prob the least bad meat you can eat, on par with cheese environmental-damage-wise (cow meat, otoh, is much worse). I think it's prob even better if you buy the cheapest nuggies, so as not to sponsor more consumption (I still can't get over the fact that meat is cheaper here than vegetarian burgers).
chicken products, including eggs, are among the most suffering-dense common products. we treat birds horribly, and it takes so many of them... the average consumer tortures animals for about ten years every year, mostly chickens
i don't think it makes sense to separate environmental impact of animal products along the lines of what type of product that is. the industry that produces cheese is the same industry that produces cow flesh and the ways in which your dollar is split when you buy either of those is not transparent. even if we're putting ethics outside the parentheses, taking time to educate yourself on the environmental impact of products and research the least worst thing to buy is kinda pathetic imo. i wouldn't give any credit to that kind of person

The “snakes are sentient” thing in HPMOR was always one of the weirdest things to me, and before I knew anything about EY I just assumed he was confused about the meaning of the word “sentient” and was using it to mean human-level intelligence. But apparently no, he literally thinks (at least some) animals have less awareness and ability to suffer than what amounts to predictive text??

I just want to tell the world that the url he provides, with a discussion of his view on animal consciousness, almost had me crying from frustration. I had to take a break after only a couple of paragraphs when it dawned on me that what I was scrolling through 1. was a debate, where this excruciatingly silly argument about pigs could later be put under scrutiny - presenting the risk of him doubling down on it with even more silliness - and 2. could include follow-up arguments from the even sillier minds of his following.

So I took out the trash, made and ate some food, and really thunk about the first round of arguments from Eliezer - about how, according to him, the pig probably doesn’t have sentience and can’t feel pain in any morally meaningful way because they don’t have reflective self-awareness - and how it all boils down to:

If you can’t use the VAS scale (a basic 1-10 questionnaire) for pain, you’re not actually in pain - you’re just an automaton. A non-sentient sensory reaction is worth no more than the reaction you get from putting a droplet of water in acid: it might make a lot of fuss and be uncomfortable, if not scary, to be in the immediate vicinity of, but if you wear protection, look away, or leave and ignore it, it’s probably over in a matter of some tens of seconds, or a minute at most.

In a later comment about the ethical consumption of animal products he describes the important threshold an animal species crosses when it passes the mirror test, which tests for a mind’s basic understanding that the reflection in a mirror belongs to the body the mind is operating in. According to Eliezer: if a pig is caught winking its dashing eyelashes at itself in the mirror to practice for real-world flirting, you could start to consider the possibility that its mind harbors an “inner listener”, a someone or something intended to experience the pain/feeling/sensation that the biological apparatus emits.

When I slammed some toes into a door threshold yesterday it hurt for a couple of minutes, but today I can’t reify and reflect on the intensity of the pain, other than that it stopped me from performing my task of gift wrapping presents for a while and that for a brief moment it had me bunny hopping around making a big fuss to call for the attention of the other people (both of them med school students) in the kitchen. But guess what? I didn’t get much of a reaction, since they were busy cooking for 17 people while deep in conversation in a pretty noisy room - a reasonable priority with regard to me being a 30+ guy with no urgent injuries/matters for them to tend to. And I didn’t feel offended by the fact that they experienced and, consciously or not, compared my sudden outburst of moaning noises to a droplet of water in acid, because…

I don’t think I’ve ever regarded my ability to label or name a sensation (other than to convey it to someone else) as the basis for actual experience with my “inner listener”.

So generally, reflective self-awareness seems to me like a secondary system: the first, pure “animal” sensation is what I’ve been conditioned all my life to avoid. Think of chili, capsaicin - it produces the sensation of heat, and it can be seriously terrifying, but in the end the heat is a lie. The pain and panic from accidentally ingesting a too-potent amount of chili have at one point in my life had me rolling on the floor just trying my best to endure it, while all the time trying to invoke…

The second order function of parsing, filtering, selecting/focusing and when needed re-evaluating sensations for a productive/pleasurable outcome from a given situation.

In the OP, Eliezer makes a tweet mentioning that he wrote the preface/intro for a Chalmers book about consciousness, and I’m looking forward to his first preface for a best-selling mindfulness/CBT book:

“Meditate on these questions: who’s reading this? Who’s listening? Are you?

If this is the first time you have come across mindfulness, chances are you have never(!) actually felt anything in a meaningful way, and to you I want to give a special good-luck wish on your path to sentience.

Continuing his reasoning, and obviously putting words in his mouth for dramatic effect: the more developed/complex/iNtElLiGeNt a mind is, the higher value its experiences and sensations should have. #EffectiveAltruism. And I would guess the opposite holds true, according to his former position. Let’s make it crystal clear that a toddler’s feelings don’t matter, because a baby is basically on par with pigs until the first time their own mirror image puts a smile on their face, at which moment they enjoy the same moral privilege as the common magpie, an obnoxiously loud, clever, street-fighting, fast-food-thieving bird of the crow family which does recognize its own beak in the mirror.

(Heck, why are they making the distinction between sensation and experience at all in the first place? Can I - a sentient being, or my inner listener, or the thing that is playing my inner listener - have sensations without experiencing them? If one were to write a book on how to be manipulative in relationships, one chapter could be devoted to “feeling and experiencing”: “Oh, you felt that the vibe was really off when I raised my voice? But how did you experience it? A feeling is more primitive; how would you describe the sensation after giving it some self-aware reflection?”)

I’m not done ranting, but i really need a break. We haven’t arrived at the chickens yet.

thank you for your service in staring into this abyss

How could you possibly say that the AI I just wrote is less sentient than a chicken?

print("Please don't kill me.")
(nods) Ah, yes, Sentience 3. Without the parentheses, that would just be Sentience 2.7.

A bird thinks and feels infinitely closer to Greater Yud than the fantasy robot, but the fantasy robot lacks the weaknesses he really has and has the strengths he thinks he does.

Lol “expert”

I’d say, though, that his writing is so tortuous and convoluted that I wonder what the people who worry about basilisks would say to explain how Eliezer Yudkowsky is truly sentient.

At first, I thought these posts were written by a.i.

He should read *I Have No Mouth, and I Must Scream*.

The idea that most animal suffering is not actually suffering, is alien and shocking to me, but it’s an idea with a history. Descartes apparently held this opinion, and acted on it, on the grounds that animals don’t have souls and you need a soul to feel anything. The significance that Eliezer gives to “self-modeling”, reminds me of apperception in Kant - the faculty that ties perceptions together into a single experience of self-awareness - except that here it is some concrete computational process. I don’t know if he thinks that pain qualia are possible without a self, but only become morally significant when there are self metaqualia to feel them, or if he thinks that pain only exists when there’s a self, or what he thinks.

Meanwhile, I’m curious about the views regarding consciousness and AI that exist in this subreddit. I’m not expecting a consensus, but I do wonder what opinions exist here. I long ago gravitated towards a kind of quantum mind theory in order to make sense of the relationship between matter and consciousness, a view that implies classical computers are unconscious as a rule, but this is a very radical view that is still purely philosophical and lacks empirical evidence (e.g. proof that there’s cognitively relevant entanglement in the brain).

I am curious how people who don’t have an argument like that, decide whether or not today’s giant computer programs could be conscious.

The only place we've ever found consciousness is the human brain (via direct personal experience, plus overwhelming anecdotal evidence from others). Animal brains strongly resemble human brains physiologically, animal behaviour often resembles human behaviour, and we can trace a shared lineage between humans and animals; in other words, humans are animals. Other than physical size and the ability to vouch for itself, there are few obvious differences between a human CNS and a dog's CNS, and there are many similarities. If it looks like a duck and quacks like a duck, you might struggle to argue that it isn't a duck (especially if you go on to argue that torturing it to death would be morally acceptable).

Computer programs have no physical resemblance to a human brain, and any behavioural resemblance is undermined by the fact that we put it there deliberately. That's all we have to go on. Pretty convincing evidence for animal consciousness and against machine consciousness, but not watertight, obviously. We don't know anything about how consciousness actually works (to the point that people keep hypothesising things like P-zombies), and we should be careful not to fill in the gaps by making something up.

In particular, we should be cautious of motivated reasoning, via channels like "chicken meat is delicious" or "animal welfare is inconvenient" or "it would be nice to think that humans are special". I think this is why people get so fixated on things like the mirror test and self-awareness, constantly trying to link them to consciousness for no particular reason. They're some of the very few differences between human minds and animal minds, so people are motivated to believe that this difference makes humans greater and animals lesser, rather than just being an inconsequential psychological quirk.
> Meanwhile, I'm curious about the views regarding consciousness and AI that exist in this subreddit. I'm not expecting a consensus, but I do wonder what opinions exist here. I long ago gravitated towards a kind of quantum mind theory in order to make sense of the relationship between matter and consciousness, a view that implies classical computers are unconscious as a rule, but this is a very radical view that is still purely philosophical and lacks empirical evidence (e.g. proof that there's cognitively relevant entanglement in the brain).

Well, I guess I don't particularly think that quantum behavior must be what gives rise to consciousness, or that computers couldn't be used to run conscious life in principle. Rather, it's just obvious to me at this point that Good Old Fashioned AI (i.e. algorithmic state machine intelligence) is prohibitively difficult to create in practice.

Chicken is delicious. AI is not. Simple.

Silicon chips are pretty good.

I’ve never been to this sub, but wow what a silly way to deliberately misinterpret the tweet. He’s simply showing the absurdity of claiming to know anything about the sentience of another being.

He also takes his own strong stance on the non-sentience of chickens though
Oh, well that's lore I'm unfamiliar with. I don't know who this guy is. This just surfaced on my feed.
you don't even have to know the lore, just have basic reading comprehension
God what an obnoxious sub.
you're the one getting whiny about your own failure to understand a tweet
Well, you probably also aren't aware of the whole context of Yud then. He isn't just showing the absurdity of claiming that. (I see you are a fan of Destiny, which is a bit of a bad sign tbh; the type of 'logic' he and that community use is questionable.)

This is the thought leader of a cult incubator movement whose main worry is AGI (artificial general intelligence) taking over. (Which they have done 0% realistically to prevent, apart from playing a lot of word games, creating endless blog posts, and trying to reinvent philosophy (and a few other fields) from scratch; they are also allergic to any STEM expert who disagrees with them / any non-STEM expert.)

And this cult incubator also created the effective altruism movement, a movement which doesn't seem to be either effective, nor altruistic in the general sense of what we mean by altruism. (Making more money is one of the plans of EA, or at least one of the plans they actually have people for; for the rest of the plans they are looking for people to make the plans, execute the plans, and evaluate the plans. Also, they don't believe climate change is an existential threat, because it isn't a risk to the 10^58 potential humans in the universe.) The movement also has a lot of hidden neonazis and open (and hidden) neoreactionaries.

This isn't just a random dude giving a random opinion. And anyway, if this guy were merely 'showing the absurdity of claiming to know anything about the sentience of another being', it would be way, way worse. This context should make that obvious. (And if it doesn't, don't worry, the Basilisk will just use your non-sentient atoms for its own needs.)
Wow Soy, great concise summary. I am gonna screenshot this anytime my girlfriend asks me to explain why I spend time on a sub reading about “rationalists”, because hitherto I have only been able to contemplate the vast void of my ire for them and how hard it would be to detail, and I just say “TRUST ME, you don't wanna know”.
lol it's so bizarre when people look into other people's sub activity, let alone try to make some sort of point about it. i'm not reading all this.
And that is why I called out you being a destiny fan. E: it is funny in a way, destiny fans would love being Rationalists, if they weren't so allergic to reading and doing the homework of actually reading the Rationalist blogs.
i'm not reading it because you're clearly not worth my time if you think what subs i'm in has any bearing on anything. i've never been in this sub before and from what i can tell this place is just another echo chamber that kneejerk downvotes anything that goes against sub dogma. enjoy, dipshit.
I checked your post history because context is important. And you had a choice to learn, and you didn't take it, all because I said that you being part of the Destiny community is a bad sign; I didn't call you bad. (I wasn't wrong, however; 'you checked my subs so I will not listen to you' is a great example of bad Destiny logic.) Of all the shoes, you picked that one. If you were a themotte poster I would have called you a bullshitter and a troll, for example, as then this information I posted would have been well known to you. You know, do 5 seconds of research. And you got downvotes because what you said was dumb, as explained above.

E: Looking more at your post history, your thin skin over being called out as a Destiny fan is pretty ironic, but hey, congrats on not being a neo-nazi, you're doing better than most detractors of this sub.

E2: And that is Mr Dipshit to you.
Interminable existential skepticism. You can entertain the idea that no beings other than yourself are sentient (to little practical utility). But it is apparent that chickens meet the definition of sentience as well as humans do (which is to say, as well as anything can, to someone recreating Cartesian doubt because they just learned it in PHIL 101). There is therefore no reason to behave as if humans are sentient while behaving as if chickens are not.
I think it's pretty obvious to anyone who has ever dealt with a computer program or with an animal that computer programs are just things and animals are sentient. That Yud thinks that neither are sentient is very sneerworthy.

Local “rationalist” mistaking sentience for sapience once again