r/SneerClub archives

this is off topic here. sneerclub is about a particular group of people, it’s not a general sub for discussion of the AI grift industry. please post better.

Bring on the bubble.

The real Basilisk was all the rugs pulled and bubbles burst along the way 🥲

Are the economics of all this plainly laid out anywhere? Altman let slip last year that every GPT query costs “single-digit cents.” If we start building proactive AI into everything, a la Microsoft’s plan to turn GPT-4 into a new Clippy, who pays the extra buck incurred every time I open Outlook?
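(Napkin math on that "extra buck", with illustrative numbers only — the per-query cost and query volume below are assumptions, not anything Microsoft or OpenAI has published:)

```python
cost_per_query = 0.05      # assumed "single-digit cents" per GPT-4 call
queries_per_session = 20   # assumed proactive/background calls per Outlook session

session_cost = cost_per_query * queries_per_session
print(f"${session_cost:.2f} per session")  # $1.00 -- the "extra buck"
```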

> turn GPT-4 into a new Clippy

Da real paperclip optimizer
"It looks like you're trying to use some paperclips. Would you like help with that?"
Sure, I guess you could bring the cost down by caching common queries (though in that case, why do you need the AI?), or by relying on scaling efficiencies, unrealized technical efficiencies, or plain old Moore's Law. But now that the expectation has been set that a new and better GPT comes out every few years, what happens if increases in training* and computation costs continue to outstrip efficiency gains?

*training numbers are extremely vague, but most estimates I've seen say GPT-3 cost seven figures to train, GPT-4 eight or nine
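(For concreteness, exact-match caching is trivial to sketch. This is a minimal illustration with a hypothetical `run_model` stand-in for the expensive LLM call, not anything OpenAI has described:)

```python
from functools import lru_cache

def run_model(prompt: str) -> str:
    """Stand-in for the expensive LLM backend call (hypothetical)."""
    return f"response to: {prompt}"

@lru_cache(maxsize=100_000)
def answer(prompt: str) -> str:
    # Identical prompts are served from the cache, so the marginal cost of a
    # repeated query drops to ~zero. Exact-match caching only helps if users
    # actually repeat prompts verbatim -- the "why do you need the AI?"
    # problem noted above.
    return run_model(prompt)

answer("summarize my inbox")  # model call
answer("summarize my inbox")  # cache hit, no model call
```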
Moore's Law kind of ran out. E.g. I have a laptop that is over 8 years old; it works perfectly fine and may even beat a lower-end laptop today. I would probably have to spend close to the inflation-adjusted cost of that laptop to get something better. Back when Moore's Law held, 8 years meant something like 16x+ more transistors, and if we look further back, it meant a lot higher clock speeds as well.
if they could prune the model enough, then on-device would be an option
The recent gains, though, are all about having an extremely large number of parameters... if you prune it down to fit an end device, then it'd probably just perform about the same as much older language models. edit: I guess a bunch of end-user devices in parallel might support the occasional query, though, as long as the duty cycle on queries is pretty low.
So interestingly, the open-source community has been taking the leaked Llama model and demonstrating that narrow, pruned, distilled models definitely can and will run on edge devices very soon. But crucial to all of that is that these models are much more narrow, focused on user-fine-tuned domains plus instruction modeling. They don't just contain everything from the internet and expensive expert-labelled RLHF. Ironically, the economics of specialized, small inference work better than a giant-ass generalist that thinks it is way smarter than it is. Sound familiar?
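(For a sense of what "pruned" means mechanically, here is a minimal magnitude-pruning sketch using PyTorch's built-in pruning utility on a toy layer. It illustrates the technique only — not the actual Llama pipeline, which also leans on quantization and distillation:)

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy stand-in for one projection layer of a language model.
layer = nn.Linear(4096, 4096)

# L1 (magnitude) pruning: zero out the 50% of weights smallest in |value|.
prune.l1_unstructured(layer, name="weight", amount=0.5)
prune.remove(layer, "weight")  # bake the pruning mask into the weight tensor

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~50% of weights are now zero
```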
Apologies if I misread a personal tone into those last couple of paragraphs. But I would like to point out that my initial comment was limited to the economics of GPT alone. I am definitely a giant-ass generalist, but I think if OpenAI's burn rate is half a billion, and Altman is already asking for 100 billion more, then my notion that the GPT paradigm has a scaling problem may not be completely naive. Probably poorly expressed, but not baseless.
I am going to go ahead and guess that extra GPT features that are coming to Office 365 will require a new tier of subscription, which will make up for the cost of running all that inference, and then some.
> who pays the extra buck incurred every time I open Outlook?

Microsoft, who has been backing OpenAI for years now and is giving them free Azure servers. They might just eat the loss for a decade or so.
The cost of a subscription you'll pay to run those few Clippy queries will vastly outweigh the operational cost.
Oh no, the cost of compute is high. Sure hope that doesn’t exponentially change over the next couple of years
Open source AI LLMs can already be loaded onto flagship smartphones and process queries at a few tokens a second, no internet required. That might mean it takes a couple of minutes for a longer response, but it's not unfeasible. Also, last year was a decade ago in AI research. Nobody knows what might be possible a year from now.
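(Back-of-the-envelope on what "a few tokens a second" means in practice; both numbers below are assumptions for illustration, not measurements:)

```python
tokens_per_second = 3   # assumed on-device decoding speed
response_tokens = 400   # assumed length of a longer response

latency_min = response_tokens / tokens_per_second / 60
print(f"~{latency_min:.1f} minutes")  # ~2.2 minutes per long answer
```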

As they don’t have a path to commercialization yet, that isn’t that surprising (sure, they are selling access, but I doubt that could recoup the costs in a year; the amortization of that is probably a bit more long-term). I personally thought it would be more. Unless they mean they lost it in a “we don’t know where it is” sense. (That would be very cryptocurrency of them.)

What is more sneerworthy is that he expects/wants to get 100 billion. That is like between 2.5 and 6 Twitters. Or enough to solve 16 world hungers.

Man, it’s certainly laughable, but honestly at this point nothing surprises me. With the amount of, forgive the joke, chatter that ChatGPT has generated, and the recent crypto nonsense, I could see them raising that much.
Finally we found the answer to the Fermi paradox. Civilizations, when they reach a certain size, go all in on trying to build superintelligent AI, which, as it turns out, leaves the economy in shambles.
For the low low price of $15/mo you too can have equity in bringing about the robot god.
robot god NFTs
Cue Dr. Evil: one hundred… five hundred billion dollars…
My robot god NFT exchange went down, and now my equity in the robot god is broken. How do I fix?
[penis](https://thenextweb.com/news/cryptocurrency-prodeum-scam-exit-penis)

It’s only been released for a pretty limited time and has hardly been commercialised; did anyone really expect it to be immediately profitable?

The sneer is that they're talking about developing AGI, not that they lost money
I think that, to the limited extent that Yudkowsky is generally intelligent, GPT-4 is AGI. Well, OK, that's unfair to Yudkowsky: he does have an internal drive, even if it is only to make himself feel smug, while a language model doesn't.
Lots of startups lose money for a while. But half a billion dollars is a lot.
At this point is developing something they could call "AGI" even that outlandish?
“something they could call” is doing a lot of work in that sentence.
Yes.
There are already people saying GPT-5 will be an AGI; I don’t think it’s a stretch to say that they could develop something that they label an AGI, regardless of whether it meets the expectations of most experts on what AGI would actually look like.
[deleted]
Okay, but rationalists and the people at OpenAI aren’t necessarily interchangeable. The latter have business interests. I find this akin to the whole quantum computer thing: there are a bunch of companies that were quick to say they had done it, when they hadn’t, in fact. I was just pointing out that OpenAI could very easily declare “we made an AGI” in a few years without having actually done so.
Yeah, this is what I was getting at. AGI is a pretty contested term. For some people it just means something that can accomplish any information-based task better than the average human (which is a pretty low bar), while others think it means some real-time agent recursively improving itself. The former seems very doable with 100B in funding.
I am confident they can spin up some sort of recursively improving agent that falls far short of the outlandish things that they also say an AGI should be able to do. And they can probably do it for less than 100B
> something that can accomplish any information-based task better than the average human

This is so vague as to be entirely useless as a definition, which makes the funding estimate equally nonsensical, sorry.
Depends on if it's actually AGI or something that they can convince gullible investors is AGI. I do think actual AGI is _possible_, but almost certainly requires still-unknown breakthroughs in CS and I don't see a direct or obvious path to one from current tech yet. Meaning it's still quite a long ways away.
And to get past silicon, I don't think the latency in any achievable MCM (multi-chip module), let alone a mainframe, would be good enough for real-time cognition on that medium. And monolithic chips are right out.
Reminds me of the conversation around the self-driving cars we were supposed to have 3 years ago. Edit: remember when Uber was investing in self-driving to hype their stock value?
Fair point.
For me the sneer is that this guy is on record saying that AI could spell doom for humanity. He has an end-of-the-world bunker, for goodness' sake. And here he is trying to round up funding to develop AGI. It looks like snake oil to me. It's just as likely that Sam knows AGI is either impossible or incredibly improbable, at least in his lifetime, and is just generating hype because there is literally no downside for him. Anyone who has ever been skeptical about Musk and his 'men on Mars by x date' or 'brain-computer interfaces by x date' or 'full self-driving by x date' should be getting a lot of déjà vu right about now.
It's transparently obvious that they're trying a new grift which is "ban my competitors because it's dangerous but allow me because I'm the one true saviour who knows what they're doing"
Have you considered the possibility that he’s a death cultist? I kid, but there is something very Lovecraftian about a couple of labs investing incredible amounts of capital to harness the sum total of everything humans have ever written to birth what half the scientists imagine to be a god.
Pretty lame god-in-a-box if it can't figure out a way to make money fast. Why hasn't it devised multiple streams of passive income for itself?

There must be at least “sparks” of profitability somewhere in those parameter matrices.

That’s 67.5 milli-SBFs!

Doesn’t really mean anything. Look at how Palantir is doing. I’ve read a pretty funny comment on that company recently and I believe it applies to OpenAI as well.

> It’s clear Jim Cramer is not a fan of Palantir Technologies Inc (NYSE: PLTR). He doesn’t even believe it’s a “real company.”
>
> “Palantir seems to be a company that’s made up for memesters,” Cramer said on CNBC’s “Squawk Box.” He complained about CEO Alex Karp’s use of foul language on conference calls and noted that if Karp “would stop dropping F-bombs,” the company would have a lot more legitimacy.
>
> Sorkin seemed to be taken aback by his comments. He said, “So you don’t think that Palantir is a real company?”
>
> “No,” Cramer said. “I think that Palantir is a series of press releases.”
>
> Sorkin was surprised by his response. “Wow, fascinating,” he said.

Who the fuck knows what’s going on inside those companies. They may lose money, but the US military is very dependent on them, so they won’t ever go bankrupt. Or they will, I don’t know, the stock market is RNG anyway. I refuse to believe that it is any different from crypto.

There are some similarities between AGI and fusion power. Both are incredibly expensive to develop, but whether it’s tens of billions of USD, or hundreds, or a trillion or more is hard to say until someone gets it working.

But the potential benefits (and dangers) are big enough that some well financed people are going to try. Eventually a group will succeed at one.

feel like maybe you don't know what sub you're on
Different guy but this sub just popped up in my feed. Is it just the antithesis of the AGI crowd?
Depends what you mean by “AGI crowd”. We don’t typically mock long-term hopes carefully tempered by realistic near-term expectations and careful avoidance of overhyping existing tech. Apocalyptic visions of total doom or perfect salvation within 5-20 years, complemented by hyping every little incremental tech improvement as teetering on the cusp of starting a chain reaction of exponential recursive self-improvement? We totally mock that. Bonus points if there is bad math, bad philosophy, and/or bad evopsych mixed in.

Our main sneer-target, Lesswrong, is tied to the think tank Machine Intelligence Research Institute (MIRI), which claims to be working on aligning AI to human values… but has the research output of a single highly productive post-doc/mediocre professor. It has produced a mix of game theory/decision theory, abstract mathematics, and philosophy output only tenuously connected to actual machine learning work, and none of it peer reviewed. Despite drawing in millions in donor money (Peter Thiel threw a lot of money at them before getting irritated at their AI fear-mongering), they ultimately haven’t achieved much.

Lesswrong is a blog on rationality and AI. It has a “Sequence” of blog posts on thinking rationally: the good parts resummarize stuff like E.T. Jaynes’s work on probability or Kahneman’s “Thinking, Fast and Slow”, the mediocre parts jumble together conventional ideas with novel terminology, and the bad parts have original ideas about evopsych, quantum physics, and reworking the scientific method to be more Bayesian. The ultimate conclusion of the blog posts is that AGI is the most important thing ever and you should donate to MIRI. Lesswrong got popular after their main contributor/leader wrote a Harry Potter fanfic.
Thanks for the summary! I'm well aware of the EA/Rationalist/Yud-didact community. The title of this post led me to believe this sub thinks a generalist is some sci fi fantasy.
For all intents and purposes it is. There's no reason to believe that even creating a perfect digital model of a human brain (something that LLMs are not) would be sufficient to simulate human intelligence. Human intelligence is an as-yet unfathomably complex system of biological functions, connected as much to the rest of the body as it is to the brain. There's no reason to believe that human intelligence can be matched by anything less than a total biological reproduction, which is something we are already capable of; it's just that, as it stands, our methods for doing so involve giving birth to and raising a child.
> There's no reason to believe that even creating a perfect digital model of a human brain (something that LLMs are not) would be sufficient to simulate human intelligence. Human intelligence is an as-yet unfathomably complex system of biological functions, connected as much to the rest of the body as it is to the brain.

What about an ideal digital model of an entire human body, not just the brain? I think if you believe in the reductionist idea that the behavior of all physical systems is in principle (though usually not in practice) derivable from just the laws of physics and the right initial conditions, and you don't think the laws of physics involve any weird non-computable rules, then you kind of have to believe that a "generalist" AI (i.e. one with all human mental capabilities) is possible in principle. But of course you can still be confident that LLMs are not the way to achieve that, and that we might well be centuries away from achieving it by any means, even if we continued to put a lot of work into it and civilization didn't collapse or anything.
No
Is there a "fusion power will take over the world" doom guy we are all missing out on? Does he/she write fantasy fanfiction? Is there a cult? E: [holy shit, there is.](https://en.wikipedia.org/wiki/Fusion_Energy_Foundation) It is *pause for dramatic effect* [Lyndon LaRouche](https://en.wikipedia.org/wiki/Lyndon_LaRouche)!! E2: [And he even wrote fantasy fan fiction!](https://www.amazon.nl/Economics-No%C3%B6sphere-LaRouche-Successful-Forecaster/dp/1980307881)
That’s a good find… I suppose the lesson is that you can form a cult around any technology.
Yes, I was just amused it was LaRouche (and that he also has books on Reason etc., [and all kinds of weird technoshit](https://www.youtube.com/watch?v=c7r8FKVMRPY)).
lmao what are the dangers of fusion power? what are the benefits of agi?
Former: Human or engineering failures?
> Eventually a group will succeed at one.

somehow, palpatine returned
We’ve achieved fusion in nuclear bombs and know exactly how it works in the sun. We can observe intelligence in humans, but we don’t have a complete definition of human intelligence, much less complete understanding of it, much less an idea of superintelligence. So the analogy falls apart, or at least illustrates why AGI is likely to be substantially harder.
The dangers of fusion power (hydrogen bombs) have already existed for decades. The only thing left to reap are the benefits.

Is this sub just an anti-AI circle jerk?

No
The mundane AI hype is getting a boost from AGI doomerism. For example it seems like OpenAI is using hints of the doomerism in their [ad copy disguised as technical reports](https://numbersallthewaydown.com/2023/04/06/gpt-4-technical-report-a-blog-post-masquerading-as-scientific-literature/) in order to heighten the hype, so yes, we will sneer at mundane reasonably successful AI efforts when they use AGI fantasies to draw investors.
I'd say this sub is very anti-(AI circle-jerk) if anything.
It definitely is. I joined thinking it was all targeted at the crazy doomers. But as someone who actively works in AI, I can say we are indeed experiencing a massive capability jump. People here are blindly ignoring that.
Massive capability jump =/= AGI. Have you considered that you might simply be succumbing to the hype? For years people speculated that the first jobs to go under AI would be things like data analysis and jobs that require high levels of that skill; the wisdom was that creative jobs would be safe for a long time. Then image generators came along, people were dazzled, and the forecasts flipped: actually the creative industries might be the first to topple. Well, as someone who actively works as an artist, I can say that human creatives aren't going anywhere.

Have you asked GPT to generate prose? Have you asked it to generate a piece of writing in the style of a notable writer? A fun task is to ask it to quote you a piece of writing by a notable writer and then ask it to generate a unique piece of prose in that style; the results are illuminating if you know what you're looking at.

People's minds broke when image generation started becoming polished, because images are flashy and because our society is awash with highly technically skilled visual artists to the point that, today, empty pictures are cheap. But mostly because it turned everything the public thought they knew about AI and its capabilities on its head. It will soon become clear that what people actually value about art, the reason a picture of Totoro, or hell, even Homer Simpson, carries more meaning than an AI-generated image in the same style, is the comic or emotional value those images carry through the medium in which they were applied. Obviously some pieces of art carry more value than others, and some stand alone in their complex beauty, but the great works of art all have something in common: the weight of human experience behind the picture that led to its creation, and how that experience is communicated through the art to the viewer. That goes just as much for AI art as for regular art; AI-generated art is just a tool by which humans will express themselves in more complex ways.

What I am getting at is that people thought that simply being able to draw pretty pictures was, in itself, something more than a mechanical skill that can be learned by just about anyone with the time and inclination. The fact is that drawing is just the mechanical skill that facilitates the creation of art. People are giving AI too much credit for being able to produce meaningless pretty pictures and are starting to claim that these pictures mean AI is approaching human capabilities in other areas. I would argue the opposite: it is simply exposing the overcommodification of simple technical skill for what it is, but to people without that skill it seems like magic. And that's before we even get into the specifics of how generative imagery is actually made, and how the images that come out can only be as good as the images that go in.

Ask yourself: why is it that AI can generate pretty pictures but can't come close to mimicking literary style beyond pulp, despite having as much data to draw upon in that area? Perhaps it's because what constitutes good writing is vastly different and less easily reducible to an amalgamation of data points than aesthetically pleasing images.
I agree with a lot of what you wrote, but at the same time I think you're underselling its general capabilities. If I ask it to do something I'm better at than the average person, like writing some TypeScript code for a large project with the typical requirements of one of my stories, it will most likely fail; but for shorter problems that fit within its context window, it's probably already on par for most subjects.

Running with your challenge, I asked it for a quote and then to write similar prose, and this was its result.

"Anyway, I keep picturing all these little kids playing some game in this big field of rye and all. Thousands of little kids, and nobody's around - nobody big, I mean - except me. And I'm standing on the edge of some crazy cliff. What I have to do, I have to catch everybody if they start to go over the cliff - I mean if they're running and they don't look where they're going I have to come out from somewhere and catch them. That's all I'd do all day. I'd just be the catcher in the rye and all" -J.D. Salinger

"So, anyways, I'm just standing there on the corner of Main and 5th, watching all these people rushing by, going about their lives. It's like they're all in this big hurry to get somewhere, and nobody's really paying attention to what's going on around them. And then it hits me, you know? All these people, they've all got these little stories they're living, but nobody's really listening to them. Nobody's really paying attention. I start to think about all those little stories, and how they're all connected somehow. It's like this big web, you know? And each person's story is just one little thread in that web. And then I start to wonder, what happens when one of those threads gets cut? What happens when one of those stories ends too soon?" -GPT-3.5

I'm not very good with literature, but that's probably better mimicry than I could come up with. Its flaws might seem obvious to somebody who knows what they're looking for, but I had to run a Ctrl+F in a Catcher in the Rye PDF to make sure it wasn't plagiarized.
The cracks start to show when you ask it to keep going. You'll find that the same shallow observational threads keep appearing.

> I start to think about all those little stories, and how they're all connected somehow. It's like this big web, you know?

I have seen this exact idea repeated in almost all of the imitations I have asked for, and it's nothing close to the content of the above Salinger quote in terms of theme. It's a very shallow digression with superficial meaning. Instagram quotes dressed up in Salinger clothing so shabby that it barely even approaches a lesser writer like Stephen King.

It gets more obvious when you start asking it about specific situations. Ask it to describe, in whatever style you like, a trip to a fast food restaurant. Ask it again in a different style, and again. I'd be interested to see your findings. Then ask it to imitate the same writer in two different situations, a trip to a fast food restaurant and a car crash for instance. See how those compare in terms of underlying observations rather than simply surface-level prose style.

It seems clear to me that it has certain points associated with what literature is, and it repeats them: things like the hollowness of consumerism, the fundamental interconnectedness of all people, etc. It has access to plenty of writing, and it can quote you specific writers no problem, but it can't even draw upon themes specific to particular writers, because it can't interpret the meaning behind the text; it can only predict what the next word should be based on the prompt and the preceding words. While we might not be able to write in the same style as Salinger, we could at least identify the ideas he deals with and come up with something that incorporates similar observations; even an ugly pastiche of his most obvious ideas would at least be tenuously thematically linked.

This is what I mean when I say that it exposes the commodified version of incredibly simple technical skills, the tools of art. It can't come up with original ideas. It might fool you into thinking it can, the same way a person might be fooled into thinking another person is more intelligent than they are by simply being told something unexpected that carries the veneer of depth at first glance.
If it likes to talk about the fundamental interconnectedness of people and the hollowness of consumerism perhaps I shouldn't have given it a Catcher in the Rye quote xD. I'll have to try getting it to generate specific experiences in the style of a certain writer when I have more time that sounds interesting.
[The coming wave of ChatGPT created websites will be so much fun](https://www.youtube.com/watch?v=S6_AkuPgLjw) hacking like it is 1999.
How does one measure intelligence if not by capabilities? AGI is a loaded term. I think it's hard to deny these models have general capabilities, if flawed ones. As does the average human. They are both superior and inferior to humans in different ways. I think the honest truth is we don't know how 'intelligent' these systems currently are, or are capable of becoming in the future. I may be biased towards hype as of late, but I think claiming anything otherwise is dishonest.
There's also the more prosaic factor that allowing direct copyright of AI outputs would end up destroying the copyright system by a nasty hybrid of DDOS and copyright trolling.
Nice! I don't work in AI but I've taken the fastai course and worked through Neural Networks from scratch so I think I have a better grasp of the BS claims both sides spout than most outsiders. The EA/EACC/Rationalist groups are easy targets but given what we've seen since GPT-2, it seems just as crazy to think that putting 100B in research won't result in something we could justifiably call "AGI".
[deleted]
I mean it's almost irresponsible levels of intellectual honesty to reword "3 frames" into "single video"
Yeah I mean I don't associate those kinds of extremist beliefs with the label AGI. I think only a very small percentage of people do.

That’s a lot of tokens they’ll have to charge for.