r/SneerClub archives
apologies if this is inappropriate here but I wrote an article showing how, among AI experts, Eliezer Yudkowsky is literally mid (https://alfredmacdonald.substack.com/p/do-ai-experts)

the key points:

  • the AI experts surveyed seem to think AI is not an existential threat, and, with the exception of R. Yampolskiy, the higher the expert’s citations / h-index, the less seriously they regarded AI as a threat.

  • Yudkowsky, when evaluated against other researchers, is about at the level of a mid-tier professor. the benchmarks used show associate professors with an h-index of 6-10 and full professors with 12-24, so Eliezer at 13 is right on the boundary of associate/full, and he is 8th out of 11 on this list in terms of citations.

  • none of the AI experts surveyed think anything remotely as strong as Eliezer’s doomsaying. Yampolskiy, the strongest “yes it is an existential threat” of the experts, thinks that “Artificial Intelligence (AI) - and its more advanced version, Artificial Super Intelligence (ASI) – [can] never be fully controlled” and that “equilibrium is our best chance to protect our species.” while pessimistic, this is still far from Eliezer’s much more extreme position.

  • based on what appears to be moderately strong expert consensus, Eliezer’s AI views are fringe and we should probably treat them that way.

  • UPDATE: Bundy’s “Smart Machines are Not a Threat to Humanity” was behind a paywall, but his university hosts the full article on its website. I have interpreted this as permission to excerpt large relevant parts of his paper, and they’re now added.

so this is kinda interesting: when I looked up a lot of these guys on youtube, I figured they'd have been all over lex fridman since he himself is an AI scholar. however, unless I missed an episode, none of these guys really go on podcasts. the top 4 researchers have largely ignored the podcast circuit, yet they have h-indexes in the nobel zone (35 to 70). the conclusion is probably the one I don't want it to be, but know in my heart is true: the disparity in who is listening to whom on this topic is a result of twitter and podcast appearances.
H-index is very field-dependent; the guidelines you cite are for medical researchers.
Is there a single field where the experts waste their time all over podcasts and twitter? If a scientist is constantly making youtubes or something I just assume that they're a crank.
There are podcasts hosted by professors on which actual professors do make appearances. My favourite one is "My Favourite Theorem", on which even seasoned experts like Ken Ribet and Joel David Hamkins have made appearances.
Do you consider Robert Sapolsky a crank ?
It aint just this field lol!
I think this is because not only is Lex a bad interviewer, but his expertise seems overblown as well. Going by his google scholar page, most of his well-cited papers were done in a team, which isn't a bad thing but doesn't speak to individual expertise on its own. He has one 50+ cited paper and that was only published on arxiv. Moreover, even when experts appear on his channel, a lot of his questions are so stupid, it makes me doubt the validity of his PhD.
> Going by his google scholar page, most of his well-cited papers were done in a team, which isn't a bad thing but doesn't speak to individual expertise on its own. He has one 50+ cited paper and that was only published on arxiv. Moreover, even when experts appear on his channel, a lot of his questions are so stupid, it makes me doubt the validity of his PhD. I think that's going a bit too far in the skepticism.
I feel warranted in my skepticism when a CS PhD invites a mathematician on his podcast and [poses the question](https://youtu.be/U_lKUK2MCsg?t=1595), "Do you believe in infinity?" and then goes on to debate the demerits of abstraction in math. This is something I expect from kids who just learned some counterintuitive fact about infinities, not PhDs in STEM.
On the other hand, I have a lower (more accurate) opinion of PhDs than you, and, after all, he's in CS.
Yes, CS people do deal with infinities as well. Any claim about the complexity of an algorithm is defined in terms of big-O notation, which makes use of infinities. This is something taught in an undergrad algorithms class.
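For anyone who hasn't seen it, here is a minimal sketch of the standard textbook definition being alluded to (not from the thread itself): big-O is a statement about the tail of an infinite sequence of input sizes.

```latex
% f(n) = O(g(n)) means f is eventually bounded by a constant multiple of g,
% i.e. the claim quantifies over all sufficiently large n (the "n to infinity" tail).
f(n) = O(g(n)) \iff \exists\, C > 0,\ \exists\, n_0 \in \mathbb{N}:\ |f(n)| \le C\,|g(n)| \quad \text{for all } n \ge n_0
```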
I was watching from that timestamp and, shortly after, Lex really said that working in multiple dimensions (assuming he meant 4+) feels *dangerous* because we can't visualize or intuit them. This is coming from an AI researcher?! I'm not very knowledgeable on the topic, but isn't linear algebra the foundation of, or at least integral to, most, if not all, AI & machine learning? What a bizarre man.
You are completely right. Hell, "multiple dimensions" means nothing in mathematics; much of beyond-undergrad mathematics deals with infinite dimensions. Even in machine learning, the space in which the input is represented is usually very high-dimensional (for example, when you are storing the RGB values of every pixel in an image).
Counterpoint: have you ever considered that matrices are scary? Checkmate.
You should watch the first 10-15 minutes of his conversation with Brian Kernighan if you think this is unfair criticism.
Let me put it this way: I know a lot of dumbass PhDs.
Scratch dumbasses. I know instances of people who basically bought their PhDs. It goes like this: they have a good relationship with their supervisor and have the connections/money to bring in funding. They barely show up for their lab work, which gets mostly done by other postdocs or even by undergrad interns. Then they just write it up, or have someone write it for them (it's impossible for me to verify which), and voila. I know of at least two people who did this, from knowing the postdocs that did most of their lab work. Like, I am not gonna say Fridman did something like this, cause I think that's reaching, but this shit happens more often than people think.
["I'll write his thesis for him, but I'll be damned if I'm going to explain it to him."](https://mathoverflow.net/a/53136)
You also have diploma mills where you can flat-out buy a PhD for writing total garbage, like the one where the Men Are from Mars guy got his PhD (it was eventually shut down for being a diploma mill, but he still touts himself as having a PhD).
Do you know many people with a PhD in AI development from MIT that are unable to grasp the basics of a time-sharing system, to such an extent that it derails the entire conversation with (one of) the inventor(s) of C? I'm significantly younger than Lex, with significantly fewer credentials, and even I could ask more enlightening questions. He puts the Wikipedia intro to the VTSS article up for literal minutes but apparently wasn't capable of reading it to realise that it just explained when and where it was made, and does nothing to explain what it is.
His PhD in electrical and computer engineering is from Drexel; he's in some kind of research position at MIT. I think that says more about his qualifications as an interviewer, and maybe about what he knows outside his narrow speciality, than about whether his degree is real.
Derek Thompson wrote an Atlantic piece yesterday that quotes EY as an “AI expert.” It’s inexcusable for one of their more popular writers. Someone with a platform should call him out on that (looking at you, Gerard).
Jesus Christ tech journalism is in a pathetic state (was it ever good? Actual question for more relevantly educated people here). According to his Wikipedia page, Thompson is a bona fide charitable member of the EA movement, if that explains anything. Don't know how steeped he is in the transhumanist deep lore and whether that guides his appraisal of "AI experts."
I swear tech journalists somehow manage to have a worse understanding of technology than the average rando on the street.
Tech journalism had a decent run in the early aughts when it was about open source software and copyleft, but basically all of that died with blogs, and since then it's been a direct outlet for whatever copy a VC-funded company looking for a PR guy wants.
He should only be cited as a Harry Potter fanfic expert.
They call it The Badlantic for a reason
Also, looking forward to reading some of your other pieces!
thank you! :)
> [Yampolskiy:] My view is that Artificial Intelligence (AI) - and its more advanced version, Artificial Super Intelligence (ASI) – could never be fully controlled. Jesus fucking christ. I think this kind of nonsense is part of the reason that normies give credence to Yudkowsky et al. Yampolskiy is almost certainly saying that stuff as a marketing pitch and not because he really believes it in a literal sense, but the average person can't distinguish between the above quote and any of the stuff that Yudkowsky says. The basic concept of artificial superintelligence is not credible and that's really the crux of why Yudkowsky is full of shit. The robot god does not exist and Yampolskiy isn't doing anyone any favors by implying that it might.
>The basic concept of artificial superintelligence is not credible and that's really the crux of why Yudkowsky is full of shit. ?
How can it not be credible? Just this year we've had AI that seems to have solved language comprehension and has developed emergent properties the designers did not even mean for it to learn. It's already able to score higher than average on IQ tests. The entire field of art has been put on its head. Individual fields like chess, Go, protein folding, etc. have been taken to superhuman levels. What exactly is not credible about superintelligence across multiple domains, to the point where it could be called super AGI? Like, I'd really like to know. I've been searching hard for strong arguments from experts for why this isn't something that could happen, if not in a few years then in a few decades. What would stop it? Can you link me to something?
Eh, ChatGPT is impressive but it hasn't solved language comprehension. But yes, your point is broadly correct: it is probably true that, for almost anything a human can do, a computer can (in principle) do it better. Some people call that "superintelligence" but I don't think that's reasonable; being smarter than a human isn't that impressive and it's not something we need to worry about. What a lot of people who use the term "superintelligent AI" seem to mean is "computers that are arbitrarily smart at everything", i.e. they can solve any problem arbitrarily quickly. That is certainly impossible. There are a lot of different ways to explain why, but the most concise one is that there are thermodynamic limits to the power of computation. It is not actually possible to be arbitrarily intelligent. A good place to start learning about this is Landauer's principle: [https://en.wikipedia.org/wiki/Landauer%27s_principle](https://en.wikipedia.org/wiki/Landauer%27s_principle) As with anything, though, if you want to really understand this stuff then it'll take a lot of learning. Like, it's easy to ask "why can we not make a rocket fly faster than the speed of light?", but it's hard to understand the answer.
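For reference, a minimal worked statement of the bound the comment above is pointing to (standard textbook numbers, not taken from the article): Landauer's principle says erasing one bit of information dissipates at least k_B·T·ln 2 of energy, so irreversible computation at a given temperature is never free, however clever the design.

```latex
% Landauer bound: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2
% At room temperature, T \approx 300\,\mathrm{K}, with k_B \approx 1.38 \times 10^{-23}\,\mathrm{J/K}:
E_{\min} \approx 1.38 \times 10^{-23} \times 300 \times 0.693 \approx 2.9 \times 10^{-21}\ \mathrm{J\ per\ bit}
```

The per-bit cost is tiny but nonzero, which is the commenter's point: "arbitrarily smart at everything" would require unbounded computation, and unbounded computation runs into physical limits.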
Thank you for pointing me in this direction. It sounds interesting. Is there a person who speaks publicly on this argument that I could listen to, or read their blog or something? Like, if there's an argument showing computers *can't* solve any problem way quicker than humans, which would mean it's "certainly impossible", then I assume people who understand this have made this argument to an AI alarmist at some point? Who would be making that argument? I would like to look up this person(s).
There are nuances here. I think it is probably true that computers can solve any problem much faster than humans. But they cannot solve any problem *arbitrarily* fast. This is similar to how any cheetah can run faster than any human, but cheetahs cannot run faster than the speed of light. AI alarmism is like saying that, because any cheetah can run faster than any human, there is a significant risk that cheetahs will learn to run faster than the speed of light, and as a result people could get eaten by cheetahs at any time and in any location. The mere fact of being better than a human is not important on its own; additional context is needed. AI alarmists don't understand this, and as far as I can tell they don't really want to. It's like telling a Christian that the Rapture isn't real and that there will be no second coming of Jesus. I don't know of any blog posts about this, although I assume they exist. People who know about this stuff think it's obvious that AI alarmism is both overblown and misdirected; it wouldn't occur to most of them to write essays about why AI apocalypse eschatology is wrong.
Thank you that's a great analogy and before I read your reply I actually did a bit of research (ironically with chatgpt) on the topic and it helped me to understand the basics so I think I have an intuitive understanding now that's at least on the right track. Unless it was hallucinating some bs when it was explaining that is ;) But thank you, genuinely something I'd not heard before and makes a lot of sense to me :)
Argumentum ad populum.

Yudkowsky, when evaluated against other researchers, is about at the level of a mid-tier professor

This is unfortunate; he doesn’t deserve this high of a ranking. His publications shouldn’t be considered to meet academic standards. Hopefully most citations are just “uh, this guy exists so we have to include his stuff in a lit review…”

Or circular citations from the MIRI/Bostrom/etc crew.
Or papers where other people did the heavy lifting and he just has his name on it.
Citation rankings can be odd. Seems like for EY it's driven by two papers; one from 2008 is often cited in the sense you mean. Another one, co-written with Bostrom, seems actually a bit interesting and broader than the usual arguments from them, so it may be a bit more "groundbreaking", but I'm not sure.
Whatever my opinion of his work, Bostrom is an actual philosopher doing actual academic work.
Yeah, and at least the abstract seems interesting, even if one disagrees with it. Probably not 1,000-citations interesting; it also looks like non-philosophers cite it in their lit reviews or to support an argument. But either way, both highly cited EY papers aren't technical AI stuff but rather conceptual ones.
EY isn't a technical guy, he's a conceptual guy because he never did the work to get technical. He didn't do the work on the concepts either, of course.
Hahaha, thank you for bringing it in with the last sentence.
Also an associate professor is definitionally an expert in their subdiscipline. Whether that means they're actually right or any good...
this may be misguided but: 1. I was trying to err on the side of being too charitable, but more importantly 2. I'm extremely confident this crowd will get very offended by the label "mid-tier professor", which to me is just outrageous. so many people would love to have a mid-level — competent but not elite — professor as a dad or husband.
Where does OP get Yud's index?

Hey, did anyone consider that if AI awakens it might just wipe out the people who whined about it the most and let everyone else be? One can hope. Wait, that's exactly the basilisk bs. What the hell are these people doing then, if they actually believe it?

I haven't read this yet but “mid” is such a great insult. If ya just go for shitty or low grade, people are less inclined to take insult. But calling someone “mid” will really fire them up if they have serious confidence issues.

Have now read it. Great little write up. I wish I had read these takes years ago. Way more coherent takes than anything I’ve seen from RatWorld
it's so good because if you've grown up with a shitty life, "mid" is better than anything you thought you would ever have. what's to get mad about? that you're not amazing? it's such a good litmus test for narcissists.
“You’re doing pretty ok!”

“He is — as I’ve described several times — equivalent to a mid-range professor; his work began at the Singularity Institute for Artificial Intelligence in 2000, and his h-index puts him right near the boundary of associate and full professor. By citations he is 8th place out of 11 here.”

Depressing to think Yud is equivalent to an associate professor; what the hell is he being cited for

I wonder what it drops to if you exclude citations from stuff published by MIRI or similar institutions.
Web of Science appears to show exactly two (2) papers: one with Bostrom from 2014 (where EY is second author) -- true, it has 234 citations, which is quite a lot! The other is a paper in Dr. Dobb's Journal, with zero citations.
The “as I have described several times” fucking kills me lol
I am not so sure about that -- I think it really depends on the size of the field. For a small field? Sure. But in the ones I am familiar with, an h-index of 20 or more is only moderate.

needs an NSFW marker as a link to a sneer, but I’ve done that for you. nice one!

crap, I thought I clicked the NSFW button and even double-checked. sorry about that, and thank you for fixing it!
Double checking is just unchecking?

I liked reading your substack stuff about the Austin rationalists. You’re cool.

I also like your ideas about on-the-ground homeless outreach. My city has some grassroots orgs, including one I work for, that do more direct philanthropy instead of donating to bigger charities who have no transparency in where their money goes. I think if rationalists spent more time with people from different class backgrounds they would have a much more realistic view of the world.

❤️

yud might be mid but this sneer sure isn’t

:')

Okay but who is citing Yudkowsky outside of rationalist circles?

Honest question: if you released this research, edited as a paper, would it count as +1 citation for all of them?

no because my name is Voldemorted to them (I am using a Harry Potter analogy because this is The Culture)

H-index is kind of a weird metric; especially at lower levels it’s sensitive to papers not getting correctly indexed. To use a random nobody as an example, semanticscholar says my H-index is 8 while google scholar says it’s 20.
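For anyone unfamiliar with the metric, the h-index is just "the largest h such that h of your papers have at least h citations each," which is why a few papers missing from one index can move the number around so much at the low end. A minimal sketch, with made-up citation counts purely for illustration:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still supports an h equal to its rank
        else:
            break
    return h

# Hypothetical citation counts for the same author as seen by two different indexes:
fully_indexed = [50, 40, 25, 18, 12, 10, 9, 8, 8, 7]   # all papers picked up
partially_indexed = [50, 40, 25, 18, 12, 10]            # a few low-cited papers missing

print(h_index(fully_indexed))      # 8
print(h_index(partially_indexed))  # 6
```

Same author, same real citation record; the reported h-index differs only because of what got indexed.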

Instead, let's have an AI give us a ranking:

Rank the following authors in order of AI expertise: A. Bundy, A. Chella, B. Nye, D. Pal, E. Hadoux, E. Yudkowsky, G. Montañez, M. O’Brien, R. Yampolskiy, S. Fahlman, Y. Wang.

  1. E. Yudkowsky
  2. B. Nye
  3. R. Yampolskiy
  4. S. Fahlman
  5. A. Chella
  6. Y. Wang
  7. M. O’Brien
  8. G. Montañez
  9. A. Bundy
  10. D. Pal
  11. E. Hadoux

Clearly the LLM understands acausal trade.

seriously though google scholar's publications list is so wonky and big.

Alfred Macdonald? Aren’t you that weird-ass antifeminist guy who was friends with Duncan Sabien?

experts considerably more versed in this research disagree with his assessment of AI as an existential threat

Well, a man will have trouble understanding something when his salary (grants, lack of regulation) depends on him not understanding it.

“mid”?

slang

Fahlman’s position has a problem: if he considers that pathogen design and nuclear risks entail existential risks for humanity, then he’d have to assess whether ML models’ progression and proliferation will have an impact on the aforementioned risks or not.

If they act as a catalyst for these risks, then “AI” (that is, ML models; no reference to an “AGI” here) would create an existential threat for humanity.

By that argument smartphones, smart chips, laptops, the internet, books, and science are all existential threats to humanity.
And number 2 pencils, as well...
That would depend on what the word "catalyst" would refer to/how much of an impact/causal relationship A has to have on B in order for A to be referred to as a catalyst of B. But, taking into account your point and the vagueness of my previous comment, we can drop the vague - in this context - word "catalyst" and replace it with: T the threshold of the impact a cause C would have to pass in order for the probability of one or both of the aforementioned risks to happen to have sufficiently increased to think that C is an existential risk in itself. The problem I see in the position of the researcher is that he would have to assess whether the progress and proliferation of ML models would be lower than T.
You have formalised your position with unnecessary jargon without increasing the meaningful content of your position. But let's begin: >T the threshold of the impact a cause C would have to pass in order for the probability of one or both of the aforementioned risks to happen to have sufficiently increased to think that C is an existential risk in itself. First of all, it is a very very bad practice to invoke variables and notations that you aren't going to use more than once. It just makes one look like a nerd who doesn't know esoteric words to complicate his point so he, instead, resorts to pointless formalisations to obfuscate his point. Regardless, your formalisation is useless too. The real notion that you miss out on is the metric by which sufficient increase in probability of risk has to be judged. You have abbreviated the word "cause" and "threshold", invoked the vocabulary of probability, but haven't given a precise sense to the *sufficiency* required for a risk to be considered existential. Which is precisely what you said you would do by eliminating the vagueness of your claim. >The problem I see in the position of the researcher is that he would have to assess whether the progress and proliferation of ML models would be lower than T Abbreviation/formalisation ≠ clarity of thought. You have merely replaced the word "existential threat" with "T" and haven't introduced even a single bit of elaboration in this comment.
hell yeah brother this is why I'm subbed here
> First of all, it is a very very bad practice to invoke variables and notations that you aren't going to use more than once. It just makes one look like a nerd who doesn't know esoteric words to complicate his point so he, instead, resorts to pointless formalisations to obfuscate his point. Slay queen. I don't know why, but this scientist cosplay they perform irks me way more than anything else they do.
>You have merely replaced the word "existential threat" with "T" No, T doesn't refer to "existential threat", it refers to "the threshold of the impact a cause C would have to pass in order for the probability of one or both of the aforementioned risks to happen to have sufficiently increased to think that C is an existential risk in itself". >First of all, it is a very very bad practice to invoke variables and notations that you aren't going to use more than once First of all, this is a constant and not a variable. Plus, its use is meant to keep using it in the context of a conversation, which we're currently doing, so I fail to understand your point. >with unnecessary jargon without increasing the meaningful content of your position The point of my previous answer was to address what your first comment entailed i.e. that the word "catalyst" entails a vagueness that motivated my second comment, which developed a different position ergo a different meaning. Now, you can think that that difference in meaning is meaningless, but that would be another point. > invoked the vocabulary of probability, but haven't given a precise sense to the sufficiency required for a risk to be considered existential I don't think that you understood my point from the beginning. I'm addressing what the researcher I'm referring to considers to be existential risks. I could be agnostic on that matter and still develop the aforementioned position
>No, T doesn't refer to "existential threat", it refers to "the threshold of the impact a cause C... I know what T refers to. My point is that you were supposed to show how exactly AI is an existential threat and not science when they both aid nuclear programs. You sidestepped that point by introducing some arbitrary concept of threshold "T". Hence why I said what I said. >First of all, this is a constant and not a variable. Plus, its use is meant to keep using it in the context of a conversation, which we're currently doing, 😐... "variables and ***notations***". And no, let me spell it out in your terms: let C be a general conversation, S the set of symbols invoked in C, T be the mapping S → ℕ defined by the assignment s ↦ cardinality of instantiations of s in the string C. Then given C, if ∃s ∈ S: T(s) > 1 then the formalisation is useful. I am sure you were able to comprehend my point more accurately now. >that the word "catalyst" entails a vagueness that motivated my second comment, which developed a different position ergo a different meaning. You didn't develop anything. You just dropped abbreviations like T and C without showing why science isn't an existential threat whereas AI is despite the fact that science contributes arguably more to nuclear programs and pathogen design than AI does.
>I know what T refers to Then why say that it refers to something it wasn't referring to? I don't understand. >"variables and notations" My bad, I only read "variables". >And no, let me spell it out in your terms: let C be a general conversation, S the set of symbols invoked in C, T be the mapping S → ℕ defined by the assignment s ↦ cardinality of instantiations of 's' in the string C. Then given C, if ∃s ∈ S: T(s) > 1 then the formalisation is useless. There's at least one obvious symmetry breaker at play here: the constant T allows us to have a conversation without having to copy/paste what T refers to, which I think is useful. That's how variables/constants/various signs used as placeholders are used in analytic philosophy for instance. Now, you might consider analytic philosophers to be pretentious nerds, that I don't know. But, again, T allows its reuse besides 1 reuse in the first comment. >You didn't develop anything. You just dropped abbreviations like T and C without showing why science isn't an existential threat whereas AI is despite science contributes arguably more to nuclear programs and pathogen design than AI does. I wholeheartedly agree with the fact that if ML models pass T - if you still don't like the use, I can say "the threshold" -, then science considered as a whole passes the threshold. If I restated my argument and switched "catalyst" for the threshold, it was so as to make it less vague since in your first list there are things such as smartphones & laptops that might not pass the threshold even if ML models do. Whereas they could be considered as "catalyst" more easily, the word being, in this context, more vague. All that being stated, I'm not saying that Fahlman would, for sure, say that ML models do pass the threshold. My point is that I think that in order for him to state that he sees the two existential risks he mentions without saying that "AI" is an existential threat, he would have to presuppose that "AI" doesn't pass the threshold
>Then why say that it refers to something it wasn't referring to? I didn't say T referred to existential threat. The word "replaced" was used in the sense that you replaced the *point* of showing science isn't an existential threat with the *abbreviation* of T. >Now, you might consider analytic philosophers to be pretentious nerds, that I don't know. But, again, T allows its reuse besides 1 reuse in the first comment. This is precisely why I provided you the criteria, didn't I? Also, analytic philosophers aren't at all as fond of notations as much as rationalists are. Read Kant's Prolegomena; you won't find any abbreviation or notations. >If I restated my argument and switched "catalyst" for the threshold, it was so as to make it less vague since in your first list there are things such as smartphones & laptops that might not pass the threshold even if ML models do. That is precisely my point as well! I am glad you agreed. All you did in your second comment was just replace the word catalyst with a weak attempt at a formalisation of the concerned threshold. And I say weak because all that formalisation consisted was of abbreviation of two words and a shoehorning of the term probability. This is unproductive for two reasons: Why are we formalising something that doesn't need formalisation in the context of our conversation, seems like an L (where L is an abbreviation for loss of time). Secondly, how do you know a T exists?
>I didn't say T referred to existential threat. The word "replaced" was used in the sense that you replaced the point of showing science isn't an existential threat with the abbreviation of T. What you said was: >You have merely replaced the word "existential threat" with "T" I took it to mean that I replaced the expression "existential threat" with a letter, "T". Which means that the letter refers to the expression that was previously used. Which means that you said that T refers to "existential threat". I sincerely don't understand why I should have taken it to mean something else. >Also, analytic philosophers aren't at all fond of notations as much as rationalists are I might be mistaken but you seem to think that I'm a "rationalist" as that word is used to refer to the LW community? I'm not, I'm critical of that community, which is mostly unscientific. As for your empirical claim on the use of notations in analytic philosophy, I don't have a solid idea about the extent of the use of notations among rats. But you're addressing what I said, which is that notations are commonly used in analytic philosophy. If your position is that analytic philosophers don't commonly use notations in their papers, you can browse the Stanford Encyclopedia of Philosophy, browse Philpapers and see that it's false. Some examples: [https://plato.stanford.edu/entries/repugnant-conclusion/](https://plato.stanford.edu/entries/repugnant-conclusion/) [https://plato.stanford.edu/entries/propositions-structured/](https://plato.stanford.edu/entries/propositions-structured/) [https://plato.stanford.edu/entries/chance-randomness/](https://plato.stanford.edu/entries/chance-randomness/) >All you did in your second comment was just replace the word catalyst with a weak attempt at a formalisation of the concerned threshold. Again, the point of the threshold is that: >I restated my argument and switched "catalyst" for the threshold, so as to make it less vague since in your first list there are things such as smartphones & laptops that might not pass the threshold even if ML models do. Whereas they could be considered as "catalyst" more easily, the word being, in this context, more vague. So, no, it wasn't "just" for the aim that you mentioned. >Secondly, how do you know a T exists? If a T doesn't exist, then it means that, given the two existential risks Fahlman mentions, there would be no threshold of the impact of any given cause on one or both of the risks that could be passed to consider a cause to be an existential risk in itself. What that would entail is that even if one cause were to increase the probability of one or both of the risks up to 1, it would never be considered to be an existential risk in itself because of that impact.
> I sincerely don't understand why I should have taken it to mean something else. If an interpretation of a statement seems to be blatantly false, then, most likely, it is not the intended interpretation. >If your position is that analytic philosophers don't commonly use notations in their papers This is not my position. >What that would entail is that even if one cause were to increase the probability of one or both of the risks up to 1, it would never be considered to be an existential risk in itself because of that impact Ughhh again with the probability. Why not just use the word "certainly" or if you wanna pretend to be a probabilist, say "almost surely" instead of "raise probability to 1"? I have zero idea what you mean in this paragraph. What is "it", "in itself", and "impact" here? Since you like formalisation, can you give me a deductive proof for the entailment that follows from the premise "T doesn't exist"?
Lol, is this a parody rationalist response?
I gotta say, I really hate that you guys cosplay as scientists. Either talk like a regular person, or have the rigour of an actual scientist. This reads like a kid attempting to sound like a grown-up.

Tell me about Roko and this NASA thing… is that a launch loop?

In my view one of the most likely ways ML models could cause a disaster isn’t by being too smart but by being too stupid in specific ways. We’ve already seen people using LLMs to write articles and emails for them, and even design code - so how long until someone takes a system design from an ML model for something critical (I dunno, a missile defense system?) and the design is flawed, but no one checked it because they assumed the AI must have been right about it? Or perhaps a major bridge or something that ends up collapsing with thousands of people on it? I think people are a little more willing than they should be to trust the output of ML models.