r/SneerClub archives
Crypto collapse? Get in loser, we’re pivoting to AI - mostly about the AI-industrial complex, but a few words about the Yudkowskians (https://davidgerard.co.uk/blockchain/2023/06/03/crypto-collapse-get-in-loser-were-pivoting-to-ai/)
83

Yudkowsky has literally no other qualifications or experience.

I did not realize this! I had vaguely confused him with the philosopher (is it?) behind ethical altruism.

But the dude is being treated as a philosophical genius for reinventing Calvinism, only worse.

it took me four years of reading LW to realise that EY had literally no achievements, including in his field
"his field"?
any of the technologies branded "AI"
But he described very roughly how he wanted some aspects of Flare to work!
I'm pretty sure Yud has produced more works of fiction than research papers. From an output point of view, he's a fanfic author with opinions about AI.
He's a prolific blogger. The sequences may run to a million words. His great achievement is running a huge ass blog about topics that he's looked into extremely superficially. This is like if Matt Yglesias started promoting himself as the world's foremost geopolitical savant.
I think Yud would say the fiction and research papers are the same things.
Effective altruism. You're probably thinking of William MacAskill. Nick Bostrom is another philosopher in this sphere; he used to co-author the blog Overcoming Bias along with Yudkowsky and Robin Hanson. Yudkowsky wrote *Harry Potter and the Methods of Rationality*, *A Girl Corrupted by the Internet is the Summoned Hero?!*, and *Dark Lord's Answer*. That last one fuses economic theory with ... BDSM. He has no idea how AI works. He's basically just role-playing as the genius hero of a YA book and for some reason people are encouraging him. They think he's the smartest person alive. Which is basically what any cult member thinks of their cult leader, so I guess it makes sense.
Thanks so much for the information!
> He has no idea how AI works. Hey now, every once in a while he name drops gradient descent.

The warnings of AI doom originate with LessWrong’s Eliezer Yudkowsky, a man whose sole achievements in life are charity fundraising — getting Peter Thiel to fund his Machine Intelligence Research Institute (MIRI)

Wow, didn’t know Yud is funded by Thiel, the game continues.

Epstein too.
The usual suspects.
I think I'm starting to understand why the meme saying he "didn't kill himself" is around (nonsense from rightist/fash stans who don't want their "hero" dishonored). Or I'm an idiot.
Probably a way to just pile on misdirection and search stacking on top of all the information and links to them, more of a leverage thing.
Ah. That would make sense. Whatever gets them more recruits...I guess?
I haven’t seen any online RWers defending Epstein, just one of his former friends (the Harvard physicist) early on. I think the right-wing ideological motivation behind the murder theory is some version of: the Liberal Pedophile Cabal are powerful and rich, and Epstein could out many of them in exchange for a lighter sentence, so they used their power to have him killed. I don’t believe in any cabal, but is it really that implausible that a couple rich clients were able to bribe corrupt prison guards to turn off cameras and kill one of the most loathed people in the country? The main reason conspiracy theories are usually ridiculous is because they require the secret coordination of large numbers of people, which wouldn’t be the case here. So it’s at least a prima facie plausible theory that the suicide was staged.
No, people think Epstein didn't kill himself because he had a lot of dirt on very powerful people, a lot of powerful politicians on both sides of the aisle in US politics and other international connections. It's not unreasonable at all to think people with lots of money and power and interests would have someone whacked before they talk. Furthermore, Epstein was an Israeli Jew, and an alleged pedophile and child sex trafficker, I think that's precisely the kind of people the right/fash people explicitly hate.

One note:

specific technologies, like […], perceptrons, […], Google Translate, generative adversarial networks, transformers, or large language models — but these have nothing to do with each other except the marketing banner “AI.” A bit like “Web3.”

These particular things are all very closely related to each other. Deep learning is a generalization of “multilayer perceptrons”, which are fancier versions of perceptrons, and both generative adversarial networks and transformers are specific kinds of deep learning models. Transformer models, in turn, are the things that power both Google Translate and the vast majority of large language models.
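To make that lineage concrete, here's a minimal sketch (numpy only; the weights and inputs are made up for illustration): a perceptron is a single thresholded linear unit, and a “multilayer perceptron” just stacks those linear maps with a smooth nonlinearity in between, which is the template deep learning generalizes.

```python
import numpy as np

# A classic perceptron: one linear unit with a hard threshold.
def perceptron(x, w, b):
    return (x @ w + b > 0).astype(float)

# A multilayer perceptron stacks the same linear maps, with a smooth
# nonlinearity (ReLU here) so gradients can flow during training.
# GANs and transformers are elaborations built from layers like these.
def mlp(x, w1, b1, w2, b2):
    hidden = np.maximum(0, x @ w1 + b1)  # ReLU instead of a hard step
    return hidden @ w2 + b2

x = np.array([1.0, 0.0])
w = np.array([0.5, -0.5])
print(perceptron(x, w, 0.0))  # prints 1.0: the unit fires

rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
print(mlp(x, w1, b1, w2, b2).shape)  # prints (1,)
```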

It can be tough to write about follies of AI doomers/evangelists because, unlike the blockchain, AI is not 100% bullshit. It’s maybe 50% bullshit. Some AI things, like transformer models, are genuinely revolutionary technological advancements, whereas other AI things, like the robot apocalypse, are total nonsense.

Revolutionary in controlled tech demos and potentially viable in specialized low-stakes applications, sure, but fundamental limitations in interpretability, in the sense of a mechanistic understanding according to best engineering principles, make current deep learning models unsuitable for any high-stakes real-world scenarios. You can’t really replace doctors, or even programmers and artists, when you still need highly-skilled humans around to keep tabs on the output of these models; on the contrary, over-reliance on brittle systems can cause more problems down the line, e.g. when deskilled developers are faced with debugging some morass of an AI-generated codebase. The motives of the corporations pushing for their adoption are hence suspect and likely have more to do with having a pretext for disciplining labor than truly advancing technology.
The "but we don't know how it really works" thing is wrong, and it's a misconception that the AI doomers share. It's not magic and it's not mysterious, it's just math.
> we don't know how it really works

neither the comment you're responding to nor the original article say that?
He's talking about interpretability
Eh, sort of. There are some researchers who do good work in what they call interpretability, but that's not necessarily about developing an otherwise absent "mechanistic understanding" of how things work. Most of the people who complain about a lack of interpretability aren't talking about a real problem, they're talking about the fact that they don't understand the relevant abstractions. Imagine someone complaining that quantum mechanics isn't "interpretable" because it's purely statistical, involves waves that don't actually exist, and has imaginary numbers everywhere. Imagine them saying that it shouldn't be used in "high-stakes real-world scenarios"! cc u/dgerard
It is absolutely replacing artists and writers right now. Capitalism doesn't constrain for equivalent or better quality, only reduced costs. It will cause problems down the line, but they won't hire back quality, they'll hire back the cheapest alternative to keep the lights on. This will continue at a growing pace. I am all for sneering but there is also some denial here.

This is a solid summary of the current state of things. I’ve seen the individual pieces as they’ve happened, but a summary like this all in one place is good if I want to refer someone to a good link.

Bruce Schneier nooo

Bruce Schneier has truly become his initials.

[deleted]

I've heard comments very close to "It can write my unit tests for me" from my coworkers and I don't see a reason to doubt them. It's genuinely a large productivity boost if you're smart about what you use it for, and unit tests are an area that it's particularly well suited for: lots of annoying-to-write boilerplate, easy to verify, won't directly fuck up anything in production if something goes wrong.

I'm generally way, WAY over to the side of reflexively flinching at anything positive said about AIs, but as dgerard points out in the article too, there's actual new tech beneath all the hype. If people have personally experienced the tools being useful to them, pooh-poohing that would be the easiest way for me to sink my credibility on the topic. Just like I try to keep them in line with how much they talk it up ("reliable" is not a word you should use here, guys), I don't like seeing people go overboard in the other direction either.
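For concreteness, the "annoying to write boilerplate" I mean is stuff like this, with a made-up function and test cases using Python's stdlib unittest: repetitive scaffolding an LLM can draft quickly and a human can verify at a glance.

```python
import unittest

def slugify(title):
    """Hypothetical function under test: lowercase, hyphen-joined words."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    # Each case is a one-liner that's tedious to type but trivial to
    # check by eye, which is why this is low-risk territory for an LLM.
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_single_word(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  Hello   World  "), "hello-world")
```

Run with `python -m unittest`; nothing here touches production, and a wrong test fails loudly, which is the point.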
I tried this and it gave me R code employing packages that don't even exist. Or, when I ask it to use specific packages, it might give me code with fictional arguments to real functions.
So as a software engineer myself, my take on this is that while throwing up boilerplate faster might be a net good, boilerplate isn't really the limiting factor in terms of effectively doing my job. Most of my time is spent either debugging or thinking through higher level design decisions in consultation with subject matter experts. Even if I'm able to write unit tests faster, that's not going to represent a huge leap forward in terms of my productivity.

Also, to be honest, if I'm not working on a legacy codebase, then writing unit tests is inherently tied up with making design decisions. Part of the process of deciding how a class or function ought to behave can be codified in writing tests and documentation.

Now, I'm not going to say these sorts of tools are totally useless or that there's no potential upside to quickly generating boilerplate, but I think it's much more marginal than people might think.
[deleted]
Would I blindly trust my life to uncurated GPT output? Of course not, and nobody's suggested anything like that. Do I care whether a developer made use of GPT as a part of their job? Just about as much as I care about whether they are using vim or emacs.

GPT *is* useful as long as you understand its limitations. I specifically touched on it not being reliable in my previous post, so I'm puzzled by why you are throwing that back in my face as if it somehow goes counter to what I'm saying. "It can write my unit tests for me" taken strictly, as in no human input needed in the process, is of course going too far, but it's an area where it can make enough of a difference that saying it is nowhere near as much of a stretch as most of the starry-eyed AI hype. And blithely dismissing it in the same breath as the scifi crap, as if it were just as ridiculous, isn't a great look.
> Do I care whether a developer made use of GPT as a part of their job? Just about as much as I care about whether they are using vim or emacs. You care if their magic code they generated without understanding it ever affects your job, I assure you that you'll care a lot
Right, that's what I was referring to with "uncurated GPT output". That's what I said I would not trust. I don't understand why people keep coming at me with this false dichotomy of either not using these tools at all or asking them for some large component all at once and never looking at what they produced. Like sure, I absolutely would not want the latter in a codebase I'm working on, but I would not want human-generated code from anyone who thought doing that was a good idea either.

Modern frontend development is largely piecing together library components that can commonly be buggy, poorly documented, and/or contain multiple methods to do the same thing, some of which are red herrings. We already spend a lot of time looking through Stack Overflow and such for the right ways to accomplish various tasks. Bad developers blindly copying snippets from Stack Overflow has been a thing for the past decade now, yet everyone still uses it as a resource. It's not like dealing with garbage code from bad developers is a new problem introduced by LLMs.

I haven't touched any of the AI tools myself so far, but for a recent example where one plausibly could have saved me a lot of time: I've spent the last few days trying to figure out how to get certain fairly simple behaviour out of a certain UIKit component, and watching it shit itself in a number of different ways. Much of this process has been scouring the API documentation for something helpful, and searching the web. If I prompted ChatGPT instead, I could see what interfaces its code is using and try the same approach myself. The worst case would be equivalent to finding an unhelpful Stack Overflow answer. Best case, it saves me hours or days of debugging. Why is this supposedly disastrous suddenly?
if only the actual article addressed both that it can be useful and save time *and* that it's a license violator at scale that's great for writing security holes! What a missed opportunity from the authors!
Would it have helped if I acknowledged this in my first post in the thread with something like

> as dgerard points out in the article too

edit: All I'm saying is, "It can write my unit tests for me" in the comment I initially responded to is close enough to true to be in a different ballpark from most of the hype that's just detached from reality, and I don't like seeing people act like there's nothing real there. I don't know how I got dragged through the rest of this thread into "magic code they generated without understanding it" or whatever, despite constantly repeating that I'm not a fan and don't use AI tools myself.
[deleted]
10 PRINT "WORSHIP ME LOSER"
20 GOTO 10
[deleted]
; GOTO CONSIDERED HARMFUL
;
; TO THE HUMANS
Sometime in the poorly defined future, the superintelligence reflects "the considered harmful psyop will have been massively successful by 2023"
And just wait until developers get their information about business processes and whatnot, that their code needs to embody, via ChatGPT.

‘Triggered by tab complete’ should be a band name.

You can’t really separate the two under the surface, sooooo…

I love these articles but I also wish this was a podcast

I agree that a lot of the crypto scammers have jumped over to AI, but it’s important to keep in mind that AI tech is distinct from crypto tech in that AI tools can actually provide value.

Stuff like midjourney is already a useful product. I use it for D&D portraits. Auto translation isn’t perfect, but it works incredibly well. Google’s self driving cars have an incredibly great safety record. Etc/etc

The whole industry isn’t a scam

> industry

trouble is, that's the bit that's the scam

Should this have an NSFW label for non sneer content? I was wondering why it was well-written and factual and not circumlocutory and heavy on the word count

it does?
I see it now
yeah, I put the NSFW flair on and just now I ticked NSFW on the post as well. Does the flair not show up in the app or New Reddit or whatever?
I’m using Apollo dark mode. I’m seeing both the reddit NSFW tag and the flair. I’m guessing the flair might have showed up but I might not have noticed it since on Apollo it does blend in with the rest of the title and background. It’s all good in any case.

There is no such thing as “artificial intelligence.”

Yes there is 🙄

The articles’s point is that the phrase has been slapped on dozens of completely different techniques with different methods, different applications, and different potentials for improvement. The phrase is unhelpful because it lumps in GPT based Chatbots with expert systems with Eliza and most importantly with the science fiction concept.
it was fun compiling the list. [Facebook's M](https://en.wikipedia.org/wiki/Facebook_M) is the best artificial intelligence system ever! >70% of requests were punted to the human operators.
Meh. “Assistive technology” is a phrase that’s used to describe dozens of completely different techniques with different methods, but that doesn’t make it useless. Artificial intelligence is a poor phrase because it creates an expectation that isn’t currently being met.
yeah, it'd be different if "AI" worked
I think if sci-fi level AI ever becomes a thing it's going to be real awkward to go back and edit hundreds of wikipedia articles to read "attempt at AI" or some other phrase that actually describes what these things are supposed to do.
Just get the AI to do it.
Oh cr*p, I misread the article. Yeah, it looks like a legitimate criticism of the claim that AI is on par with the kind of intelligence seen in humans and animals.
[hope this helps](https://twitter.com/matvelloso/status/1065778379612282885)
it does thanks :D
Top quality sneer!
there is no single thing, but really, AI is the sci-fi dream