r/SneerClub archives
I really thought this was a brilliant satire at first (https://i.redd.it/stpzl80drgy91.png)

FINALLY art has been optimized. I’m not springing for those metrics that are expensive to compute tho

I think there were some historical times when some artists and critics unironically did believe art was a linear progression of improvement leading up to their era and all previous works were obsolete stepping stones

all of that kinda ended around the same time shit thoroughly hit the fan and challenged everyone’s assumptions about progress in far more domains than just art

it’s like these guys are slowly plodding through a History of Western Thought textbook (or just skimming a summary and guessing at the details) and haven’t gotten to the chapter on the 20th century yet

WW1 was a real deal breaker for all the people that thought that society always progresses in a better direction all of the time. It’s wild seeing all the pre war futurist art celebrating the car and the steam train just … stop dead with the war.
I took a really interesting class in school that integrated art and literature and history and such into one big story of the West, and they observed the same kinds of breakdown from traditional linear-progressive thought in a variety of fields, actually starting before the war:

* in music, Schoenberg demolished standard diatonic harmony and put up his 12-tone system as an alternative in 1923, while the war also pushed Bartók off the edge into fully alternative tonalities; but Schoenberg had already brought atonality to the spotlight with his second string quartet in 1908, and Stravinsky's raucous premiere of The Rite of Spring was back in 1913 and in some views exemplified the decadence and implicit violence that led to the war
* in science, Einstein had blown up everyone's simple Newtonian understanding of physics with special relativity in 1905, requiring them to consider every ~~perspective~~ frame of reference as equally valid despite seeming mutually contradictory, and between the wars quantum mechanics started exploring even deeper into the counterintuitive than Einstein dared to go
* in literature, Virginia Woolf marked the beginning of modernism "on or about December 1910"
* back in visual art, the Expressionists, Fauvists, Blue Riders, and Cubists all started (and in some cases ended) their heydays before the war
> in music, Schoenberg demolished standard diatonic harmony and put up his 12-tone system as an alternative in 1923

I always felt his earlier pre-12-tone works were so much more compelling and beautiful, which (to be fair) is related to why he switched techniques. On the other hand, the mathematician and engineer in me was frustrated at how arbitrary serialism ended up being. If you're going to pick a handful of rules, why *those* specific rules? Not only were they ordinary and restrictive, but even by the simple metric of "will it prevent composers from cheating" they didn't work that well. I'm sorry for the serious reply, I know this is Sneer Club.
And hey who doesn’t like get the party rockin with some solid 12-tone bangers, amiright? I played Schoenberg at full volume at the last party I was invited to about seven years ago. Man oh man it was off the chain!!!!
I think you can say there were periods when this kind of thinking flared up, like a bad virus. The Renaissance perhaps (with big caveats). 19th century art *by and large* is not overtaken by this pre-20th century hubris, in part (lol) because of the industrial revolution and its consequences.
I don't think the renaissance really counts because they hadn't really discovered the idea of "progress", yet. Theirs was a "back to the original roots" kind of movement after all.
There is a view in Renaissance art that it could only aspire to certain Classical forms which could never be surpassed, at least for purity of expression. But the Renaissance was absolutely aware that it had access to technologies (both material and intellectual) well in advance of what had gone before, alongside the “rediscovery” of those which had been lost (to Western European Christians) in the intervening centuries. Simply the fact that the Renaissance had Christianity, rather than paganism, was something of which the period was very self-conscious.
Tbf, if, say, portraiture is what some group of artists deal in and making realistic representations for clients is their entire reason for existing, I can understand exactly why they'd think that (especially allied to ideas about scientific progress). All the "primitive" art would indeed look primitive, like "haha, they hadn't even figured out basic perspective, the morons". ...and then photography drops (amongst other things) and most of the reasoning behind that attitude evaporates. (I'm pretty happy that generally, and collectively, as a civilization, we've kinda realised that that idea of art isn't a very good one.)
Right, but portraiture has almost never been in the business of realism. Realism as a virtue of *some* portraiture does emerge periodically in waves (this happens in places in Classical Greek and Roman art, for example), but rarely as the locus of "portraiture as art", which historically is always also moved by contemporary aesthetic tendencies and of course the value that the subject or buyer (an aristocrat, for example) places on a pleasing rather than a factual likeness.

Even a Vermeer, or Velázquez's [Pope Innocent X](https://en.m.wikipedia.org/wiki/Portrait_of_Innocent_X), in their striking attention to detail, *explicitly* perform an essential realism towards the portrayal of a heightened reality, a wildly coloured and expressive version of reality which is so to speak "more real" than a comparable photograph would have captured. Of course, a photograph can also perform such a realism to the same end, but this only draws attention to the fact that photographers and painters have always been aware that their arts are mimetic, and never a substitute for the real thing. In fact, aspiringly photorealistic art is a post-photography invention: these are artists imitating the kind of mimesis done by photographs, whereas the Impressionists and Leonardo da Vinci alike are interested in imitating the *effects* of light in its complex interaction with the subject and eye, not with ideally representing the three-dimensional object on a two-dimensional plane. So to be honest that attitude had no particular reason to evaporate.

Of course, if we look at things from a reductionist "Two Cultures" point of view, those in the "literary" (or artistic) culture which C. P. Snow so tendentiously and falsely characterises as inherently conservative *would* see things in these terms, if they existed. But they don't exist, and things have never been like that; the problem is that people like our friend in the screenshot read, or read about, C. P. Snow, but they have no interest in reading the people he's discussing.
Ah, completely agree, I should have qualified that a bit more. I meant heightened reality, and photography I just feel is [one important] marker as to when production of machine-reproducible imaging started to go into overdrive, and [one major point] where purely representational art really starts to lose ground (even then, afaics, it's gradual, fits and starts, until you hit the C20th).
I mean, one of the reasons you can see that is obviously nonsense is that late antiquity goes from the classic "heightened realism" to much more abstracted representations... And often done by the same artists.

Fun fact about google translate. Part of making a good translation model is to have a huge corpus of text that’s correctly translated in both the languages you want to translate between. Unfortunately huge volumes of text that are well translated in multiple languages are pretty darned rare, and your highly translated texts like the classics or pop culture hits like Game of Thrones aren’t enough.

So the body of text most heavily used is UN and EU documents, which are both voluminous and produced in multiple languages in parallel. They also are very, very dry, which does not help the quality of the translation product.

It will be a long, long time before machines can translate creative works up to the standard of a manual translation. I suspect that it won’t happen during my lifetime, tbh.

I am a specialist in machine translation. I don’t think it will ever be as good as human translation for creative writing or for highly technical writing like legal writing. There are too many human worldview assumptions baked into language. Machines may get smarter than humans but can never get more human than humans. But machine translation will get good enough for 95% of translation needs.
Example for those who are curious, a sentence like "I couldn't put the book in the bag, because it was too small" is hard to translate for any machine, because you need to know whether "it" refers to the book or the bag to accurately translate into a language like Spanish, where book and bag are different genders. It's obvious to a human, because you know what a bag is and what it's used for, but not to a machine.
Another example I like is translating the sentence “Frank gave Ed his bicycle” from English to Danish. Unlike English, Danish has two separate words for “his” depending on whether it refers to Frank or Ed. If you translate that sentence in isolation, you would likely think that it is Frank’s bicycle. But if it had been established (maybe several chapters ago) that Ed had lent Frank his bicycle, a human translator would realise that Frank is merely returning it and the “his” refers to Ed. It will be a long time before machines can do this type of discourse-level analysis.
I don't speak Danish, but that makes complete sense. I mean, if you've ever played AI Dungeon, you know it sort of works moment to moment, but it has a really hard time remembering events from a few scenes ago and will even sometimes forget your location if it hasn't been mentioned in a while and place you somewhere else. It's like how other people's dreams are really boring, because there's no real continuity through them.
If it's a standard transformer, it has a maximum context window of input and no way of enforcing coherence with earlier text from before that.
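For instance, a rough sketch of what that window means in practice (the limit here is just an assumed number, not any particular model's):

```python
MAX_CONTEXT_TOKENS = 2048  # assumed limit; the real number varies by model

def build_model_input(story_so_far_tokens):
    # Everything older than the window is simply never shown to the model,
    # so details established "a few scenes ago" can't influence the output.
    return story_so_far_tokens[-MAX_CONTEXT_TOKENS:]
```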
Sort of the same issue as the image models, I guess. It will draw a detailed scene, but much of the detail of clothing or backdrop or anatomy will be sort of nonsensical if you look closely. It has only learned statistical relationships between patterns of pixels in its training images, but doesn't know that the training images are representations of the real universe or imagined versions of it, and while the rules it learned about how different features in images relate somewhat correspond to the rules of the real world, it's bullshitting a lot of the time.
It’s been interesting to watch NLP/machine translation swing back toward this perspective over the last decade. I remember learning a bunch of ML and translation approaches from Norvig’s AI textbooks, and 15 years later his insistence that enough web crawls will result in beautiful prosody seems… quaint. Even if you stick to well-known corpora, the big ML approaches just give you word salad instead of Cervantes. Getting something the quality of English As She Is Spoke out of two non-European languages with most statistical inference systems still feels like an absurd pipe dream! Of course the ML/AI hype train demands that we constantly insist that true AI genius is half a decade away, just insert a few million in R&D funding and we shall all be exponential growth forever!
I suppose I see it the opposite way. I’m amazed how good Google Translate is already, knowing how difficult the problem is. But it’s a classic Pareto Principle: 20% of the effort gets you 80% of the progress.
DeepL is actually amazing at interpreting phrases and rendering a translation that goes far beyond simple word-for-word translation. Star Trek it ain't, but it's one of the more powerful translators out there right now.
There’s been great progress in those techniques, but I will note that almost all of them have really serious flaws in terms of bias and a tendency to regurgitate weird subsets of their training data. The GPT-3 preimage attacks are a fun new class of ways to poke at how to get the emperor to show how few clothes they ever had.
So I take it you mostly work with asciibetical languages then? Google Translate is so, so, so bad on East Asian languages I understand.
Germanic languages yes.
Yeah so besides English, I speak a bit of “get me out of this meeting and to the Biergarten” German and a bit more Belgian/Flemish. Once you get out of closely related EU languages there’s a rapid fall-off in quality, for Reasons.
Yes, that is no surprise to me. Sprachbund languages are more grammatically similar and also tend to share similar vocabulary. Also there is a much larger corpus of translated texts available for training.
Once again SVO hegemony wins again! #chomskyjoke
(Although technically most Germanic languages are V2, not SVO!)
Curses, ruined by facts not determined by the universal grammar I dreamed up in my head again!
I am deeply skeptical of any argument that machines will *never* be able to do something, regardless of the thing.
I've found that machine translations can be pretty good for highly formulaic stuff like recipes, but even then they can get tripped up fairly easily.
That, and depending on *why* you’re translating, the choice of words can make or break you. Google Translate is okay if you’re a tourist trying to find the bathroom, but if you’re doing anything technical, medical, or especially legal, the difference in word choices can change things enough that the contract doesn’t say what you think it says.
Or even just imagine writing a beautiful novel and then thinking "Eh, I'll just run it through Google Translate so Spanish-language readers can read it." It's ridiculous to claim that Google Translate can take the place of human multilingualism.
Pah, this is just a lack of data; with enough data this kind of translation is easy. It will just search the dataset for a previous full translation a human made of that book and give it to you. Easy, fixed by just translating all human texts into all human languages. And for my next trick I will solve the travelling salesman problem.
There's also a lot of crowdsourced human labor going on. Google Translate improves over time because people can correct translations or suggest better ones. A lot of what's presented as "AI" is actually just masses of invisible people doing micro-tasks.
The machines will take over, sooner or later.
Nah they dont have to, ai in the form of corporations already has.
Wait until your job gets automated
That is literally impossible.
Still waiting for that robot taxi I was promised.
Self driving cars can be found at google headquarters. It's only a problem of scaling now, not of design or functionality.
Right, this is tacitly admitting that they don’t really work, you just don’t realize it. The entire premise of self-driving cars, at least the ML-heavy ones designed by Google and others, is that they inherently scale. No difficult and expensive road changes, no fragile laser mapping of the real world: once you finish programming them you send them out into the world and they just work. Sure, you need to buy lidars, but the elimination of the human means this capital expenditure is trivially recouped, hence Uber’s business plan. Scaling is about making more cars and more computers, and we’re already very good at that as a society.

The fact that Google has not “scaled” a potentially massively profitable product outside of their headquarters means that they don’t think it really works in real-world scenarios. They have the engineers and experience to scale software projects, the hardware experience to build the electronics, and the money to build the cars. If they aren’t currently mass producing them and sending them out to run as taxis in even major cities, that’s because they know it doesn’t work. (Hint: Waymo said in 2018 that full self-driving is still 10 years off. Funny how it’s always 10 years away…)

Instead what we actually see is that self-driving cars only show up in very controlled circumstances, either with a human there to monitor, or in a well-mapped and controlled environment like Google’s headquarters. In cases where “self-driving” cars are used in vaguely real-world scenarios, the results are shit: semi trucks driving into freeway walls, Teslas turning the wrong way down streets or running down mock children, and Ubers killing pedestrians. The results are not exactly confidence-boosting.

The brutal reality is that designing a self-driving car capable of puttering about at 20 mph through a large industrial park is pretty easy for a company like Google. Speeds are low, the routes are controlled and well lit, and when in doubt you can stop. The real world is chock full of nasty edge cases, bad weather, bad roads, poor lighting, and high speeds that make mistakes fatal. The process of handling all of the edge cases humans deal with all the time is going to take … actually, we have no clue how long it’s going to take. It might be 2028 like Waymo says, it might not happen during my lifetime. (My favorite edge case was a Tesla freaking out about stop signs in the road, making the human drive it. Turns out it was following a DOT truck carrying a load of stop signs. The real world is like that.)

In the meantime we already have the technology to provide most of the promised safety and efficiency gains of self-driving. It’s called public transit.
Good effortpost this one, and it is missing one minor thing: this all assumes that nobody will try to fuck with the robotcars. Luddites who lost their jobs will eventually talk to enough anarchist tech nerds to figure out how to do adversarial attacks on the machine learning of the cars and mess things up in expected ways. [Like this](https://spectrum.ieee.org/slight-street-sign-modifications-can-fool-machine-learning-algorithms), and last I heard these kinds of attacks are unavoidable with ML.
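The basic trick behind those street-sign attacks is usually some variant of the fast gradient sign method: nudge the input image in the direction that most increases the model's loss. A minimal toy sketch in PyTorch, with a made-up classifier and a random image standing in for anything a real car would actually use:

```python
import torch
import torch.nn as nn

# Toy stand-in for an image classifier (e.g. a street-sign recognizer);
# nothing here comes from any actual self-driving stack.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # hypothetical input
true_label = torch.tensor([3])                         # pretend class: "stop sign"

# Fast Gradient Sign Method: nudge every pixel slightly in the direction that
# increases the classification loss, so the model misreads the image even
# though the change is nearly invisible to a human.
loss = loss_fn(model(image), true_label)
loss.backward()
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(image).argmax().item(), model(adversarial).argmax().item())
```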

Language/Art? Where we are going we don’t NEED language/Art.

jumps into the paperclipofier

Haha that’s such a CGP Grey thing to say too. I fuckin hate that channel. They shamelessly shill for Waymo and self-driving cars in general with the same sort of approach. I remember they made a video about why traffic backups happen, and it was basically like “Well, each person in sequence takes a short time to react to the person ahead of them, and that delay accumulates down the line,” which, sure, is one reason traffic backups happen. And then he just says “Self-driving cars are the answer.” Complete sentence. I literally laughed out loud a bit.

I actually worked at one of the companies referenced in the “Humans Need Not Apply” video. It’s almost comical how badly he overstated their capabilities. I actually got ahold of him via email to politely point that out (I was still a fan then, so it was polite) and I got an empty “thanks” reply. In retrospect it’s pretty clear that he’s really not a good faith operator.
He unironically thinks Whig history is factually accurate which no actual historian has taken seriously in a century.
jesus now I am angry about that self-driving cars quote.

I just really want to see his work now.

This is a weird take because AI art generators (which is what I assume they’re referring to) only work so well because there is a huge corpus of existing art to use as training data. The math problem that machine learning solves is essentially “how do we make a system that replicates patterns present in this data”. You couldn’t somehow skip the step of actually making the data(art) and go straight to AI models.

Yeah, AI art models are a big technological advancement in art, but all the existing non-AI art was directly necessary to make this advancement happen.

Begone, J. Evans Pritchard, PhD

Blue guy is right

I stand by what I said. People will implement it soon, if they haven’t already. Aesthetic rating networks are a thing, and image generators are capable of combinatorial generalization, so it’s probably possible to use search (or maybe even gradient descent) to find images that are better than the ones in the training set (according to the metric), and then train it with those. The success of these techniques depends on the critic not being goodharted, so the results might be inferior to training it with human-curated data, but that is more expensive.

Is there any flaw in this reasoning?

the flaw is that you don't know what art is
art is pretty pictures and the more pretty points you can assign to art the more art it is. quite straightforward imo
We sell our art for $20/pretty point. Finally, an objective standard for art markets. Progress!!
Go on...
eom
People will implement *what* soon?
* Train an aesthetic measuring network
* Take a set of prompts
* Generate many images for each
* Select the best ones according to the network, as long as they are sufficiently realistic (according to the generator or another net) and still match the prompt
* Finetune on those

Or something like that
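In rough pseudocode the loop would look something like this; every function here (`generate`, `aesthetic_score`, `prompt_similarity`, `finetune`) is a hypothetical stand-in, not a real API:

```python
# Hypothetical stand-ins: in a real setup these would be an image generator,
# a learned aesthetic-rating network, a prompt-image similarity model, and a
# finetuning routine. None of these names refer to real libraries.
def generate(model, prompt): ...
def aesthetic_score(image): ...
def prompt_similarity(image, prompt): ...
def finetune(model, dataset): ...

def self_training_round(model, prompts, n_samples=16, keep_per_prompt=2):
    """One round of: generate, rank with the aesthetic critic, finetune."""
    curated = []
    for prompt in prompts:
        candidates = [generate(model, prompt) for _ in range(n_samples)]
        # Rank by the aesthetic network, but keep only images that still match
        # the prompt, to limit obvious Goodharting of the critic.
        ranked = sorted(candidates, key=aesthetic_score, reverse=True)
        on_prompt = [img for img in ranked if prompt_similarity(img, prompt) > 0.3]
        curated += [(prompt, img) for img in on_prompt[:keep_per_prompt]]
    return finetune(model, curated)
```

As noted earlier in the thread, the whole scheme lives or dies on whether the aesthetic critic gets Goodharted.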
Please don’t take this as bait or something, but to clarify: you’re saying that a process of iterated ranking on AI-generated digital images is what’s going to best art, that is, the whole concept of “art” as created by humans?
It will at least do so for a big subset of the task of generating the best image given a prompt, which was the context of the comment the post is about. I also expect that the same will be done to music soon, at least if we ignore the lyrics. But in the future, when machines "understand" the world better, something along these lines will be applicable to art in general.
I’m afraid I don’t think that *was* the context of the conversation, which I’ve already read past this screenshot, because the term “best” or “best image” doesn’t show up at all. *That* conversation was about the ability of “techbros” to barge into an existing sphere of human endeavour and have all the answers. If “how do we make the best image” was the question of art then I suppose we have our answer as far as you’re concerned, but I don’t know what your criteria are for “best image”.
My criterion for best image is whatever someone considers the best image. This varies between people, but models can take this into account. Other areas of art (all of them?) also follow the pattern of there being a data structure that people can prefer over others, and optimizing it is a problem that machines will eventually basically solve.
I think the word you were looking for when you said “art” was “decoration”
Even dumbing it down to decoration, the idea that we can produce a model to select the most attractive decoration is hilarious in its overconfidence. What are fads anyways?
You're conceptualizing "art" as "the best image given a prompt"? Was AI the first time you heard of the concept of "art"?
> It just so happens to be the case that the solution to most problems is to make a machine to solve that problem. It shouldn't surprise you that tech people were able to make so much progress in art, when artists have for millennia basically refused to treat it like what it (and everything else in life) is, a math problem

This is another genuinely non-bait question. This is something I’ve seen crop up with tech-minded people a lot recently: they think artists are, as a group, averse to technological solutions, to using technology in interesting ways, to admitting that technology plays a big role in the creation of art. In my experience this is the opposite of the truth, although artists *do* worry about the power of technology, in particular technology-based financial *bubbles*, to eat them out of a career. So I’m curious where you got this idea that artists have, quote, “refused for millennia” to do the technology thing. I genuinely want to know: where specifically did you pick up this idea, and am I wrong in taking it to simply be a falsehood?
Like, the entire realm of photography and optics was invented by artists. The idea that artists have been hostile to technology for millennia is fucking laughable.
I wasn't talking about technology, but about seriously treating art as a mathematical problem, or something to be analyzed precisely at all, rather than just doing stuff like saying some structure follows the golden ratio or something like that. Granted, in the past neuroscience was too primitive to properly study art (and it basically still is), and creating image generators was not feasible, but what I said is basically true.
Right… Now I have no idea what problems you think mathematics will be solving in the future, besides your being very straightforwardly mistaken about the historical relationship between art, optics, and mathematics
I'm not mistaken. People have preferences over trajectories reality can take. Part of that considers whether what they see is pretty (but obviously art is about more than that). If you want to solve art (or understand it properly at all) you need access to that rating function. You can do it by either studying the brain directly or by observing human behavior (like the score they give to an image) and fitting a model to reconstruct that part of their minds. I'm pretty sure the vast majority of artists don't think about art this way, but that's how you study it in mathematical terms.
[deleted]
What else will you use to rate art other than people's opinion? And how does that relate to the end of the world? (or to bad things in the world?)
> What else will you use to rate art other than people's opinion?

This is precisely the point about tech bros barging in and acting like they’ve solved something that the humanities have been wrangling with for millennia. What is art, and how do we evaluate it, is an incredibly rich area of discussion with centuries of written text, and you clearly have read none of it. Even if we accept the premise that art can be ranked by opinion, and that is an extremely controversial thing in its own right, even a high school art history class would demonstrate serious flaws with a plan to feed opinion into a machine to rate art. The past century of art is littered with examples of artistic movements that were *hated* by critics and the public when they were released, only to become beloved and genre-defining later. Van fucking Gogh literally couldn’t sell his paintings for years and committed suicide because of it. If one of the most famous and celebrated artists of the 19th century was unpopular and unsuccessful during his lifetime, then why are we presuming that people’s opinion on art is a stable thing on which we can build anything?
> but about seriously treating art as a mathematical problem, or something to be analyzed precisely at all

This alone tells me you have absolutely no knowledge of art practice or history. I don't know why you keep digging yourself deeper and deeper when it's so obvious to everyone reading this that you just don't know what you're talking about.
So what are your views concerning the evolution of art? Given that you believe that music will follow suit behind the visual arts with regard to algorithmic production, do you think AI will be able to advance art, music in this case, in a meaningful way? How far are neural networks constrained by the limits of their datasets?

I have noticed that AI-generated art oftentimes has a sameness to it depending on popular prompts which are passed around, such as taking on the appearance of the most popular digital artworks from a particular dataset, the 'artstation aesthetic' if you will, which at best mimics a small subset of popular artists. Can AI meaningfully extract from a small dataset and extrapolate while also incorporating a wider dataset, mindfully or no, in the same way that human beings often do when creating art? It strikes me that many if not all artists evolve in an organic fashion that may or may not include deliberate influence but always builds upon an unconscious selection process derived from a wealth of external influence and subjective interpretation of those influences, which allows them to create what might be referred to as an original amalgamation.

Can AI be trained to do more than amalgamate visual and sonic reference? Can it imbue those references with coherent meaning the way that a great artist can, while also creating a piece of art that will stand alone as striking? Just because many people don't consider the cultural history imbued in a particular piece of art, that doesn't mean that it wasn't integral to its creation and its reception. I believe there is far more to meaningful artistic expression than simply analysing what combinations of aesthetic or sonic components please the senses; rather, that is only a part of the whole. It's why I suspect that literature, and narrative in general, will be the most difficult hurdle for AI to overcome in a truly meaningful way.

It will be an effective tool in the creation of art no doubt, but like Borges' Library of Babel, simply being able to access an infinite quantity of books filled with every possible combination of letters does not guarantee that one will actually be able to locate Shakespeare among them, especially if one does not know what they are looking for. Eventually we'll get there, but I would imagine that when we do we will, hopefully, be treating an AI of this capacity as more than a mere tool.
Please do not post entire essays which lend credulity to the morons
Why not? Is this user even worthy of sneering, or is it just some anonymous user that this sub decided to pick on because they are misinformed? Seems a bit like low hanging fruit and not in the spirit of this sub to be perfectly honest. (I posted this last night and haven't followed the discussion since then for reference, I don't know how their views have unfolded and just wanted to challenge their idea that art was something that could be 'solved' by positing that there exist strata of art beyond the realm of 'pretty pictures', which I don't believe can be reduced to a set of algorithms interpreting from datasets)
This person is a techbro type who wants to optimise all of art with programmer know-how and a smattering of neuroscience, and came in here after being amusingly screenshotted to argue their depressing case at unbearable length; why on Earth do you think it’s not in the spirit of the sub to sneer at that? I mean this is worthy of a post on its own:

> **Eventually we'll get there** but I would imagine that when we do we will, hopefully, be treating an AI of this capacity as more than a mere tool.
I just thought the general target was higher than some anon from the SSC forum with little to no influence, though I defer to you on that front. I don't think my post was any less of a refutation than the guy who stated that trying to optimise the creation of art through machine learning would only lead us deeper into the creatively bankrupt quagmire of popular culture vis-à-vis the US film industry, just less aggressive.

> I mean this is worthy of a post on its own

Sneer away, it has been fairly quiet around here lately anyway.
I’m not going to litigate the difference between different posts beyond that at the time I happened to log on yours was an encouragement to debate and the other was not
Fair enough.
don’t take the mod comment to heart or whatever, I was in a moment. This particular person really got under my skin
No hard feelings whatsoever. I'm just a naive sneerer. You folks do good work here as far as I'm concerned as evidenced by the number of people I see in various threads proudly proclaiming their escape from the quagmire of rationalism by way of this subreddit. To be honest I hoped I might be able to make a similar contribution in this particular case but some people are unreachable.
This sort of a scheme simply leads you to the current state of the US film industry: an endless echo of safe familiarity that a broad sector of the market rates highly. What in the fuck are better ML models applied here gonna do besides reinforce this trend? The problem is that opinion rating is an impoverished metric by which to measure art, even in spite of the fact that our opinion of it is all we have. You can construct passable or maybe even good art from preference ranking, but the idea that in the limit this produces an "optimum" is delusional. Why do you guys in the ratsphere get to claim Goodhart only where it suits you?

Moreover, art is a culturally reflexive process, so even the notion that you COULD do this would be eaten up by artists and spat back in your face. In that regard alone it's self-defeating. That's not to say corporations won't try. They already are, and have been for a long time. It will only result in immense cultural damage.

You will probably look at some of the responses in this thread and go to yourself "haha, these stupid plebs think that art is some magical phenomenon above the laws of physics! Art can also be computed!" Let's disregard the fact that Church-Turing-Deutsch is still very much a thesis and assume it's true. It doesn't mean that art WILL ever actually be reduced to a deterministic, computational model. Maybe the model is too big to ever feasibly compute. Maybe it requires data that isn't feasibly accessible. Maybe the nature of art is such that it actually just resists there being a stable or semi-stable "optimum" (a la its self-reflexive nature that I described before).

This is some real "the map is the territory" shit with you guys. Optimisation problems are one specific framework for DESCRIBING problems. The fact that problems can be DESCRIBED as problems of optimisation does not actually make them problems of optimisation. For some problems, an optimisation model is a good fit. For others, it is a terrible fit. Why is this so hard for you guys to understand? Actual scientists and systems engineers get this, but no one on LW does. You're not dropping Cold Hard Logic Bombs that we can't comprehend because of our tiny mushy human minds, you're just outing yourself as a delusional pseud.
This may only apply to streaming, but I suspect the ML models will actually improve here and are going to lead us down a more annoying path than simply reinforcing the current trend of blandness. I’d wager that engagement with content is probably more indicative of value for streaming services than some aggregate of opinion ratings. The volume of online discourse generated by the topic of content is at least one of the drivers of engagement. I can see ML getting much better at predicting the volume of engagement generated by potential content topics. I’d anticipate more low-budget miniseries based on tabloid stories, and documentaries shallower than Wikipedia articles. Still lacking substance, but also controversial (in the right ways). You also don’t need licenses or even expensive actors as much if you boost it to your own front page. Good margins!
Yeah that's a good point actually. When I said that ML models would only reinforce current trends, I more meant in terms of basically SEO gaming for cinema, rather than blandness. So your example (which is undoubtedly correct and already happening at least a little bit) is even more evidence in support of what I'm saying – ML models don't converge to an "optimum" piece of art, but just engagement-primed sludge.
You need to make very weird assumptions to come to that conclusion. Unlike the film industry, the policies of ML models can perform a lot of exploration instead of just exploitation. They are capable of combinatorial generalization instead of just generating points inside some sort of abstract convex hull formed by the training data. And they could make art that you, instead of the general public, would rate highly. Do you think that it will produce art that you say is good but somehow actually isn't?

And actually solving art won't consist in creating the optimal art piece. It would be an algorithm that, given everything that has happened up to a point, will make the best art possible in that moment, like a human artist would, but better.

I really doubt that the artistic value function (which might be impossible to separate from the whole value function, and yes, I'm assuming human value functions are a thing) is that big. Data found freely online is probably more than enough to reconstruct it very well. And in principle I also doubt it is algorithmically much more complex (as in description length) than a human mind, which is small and can probably be accessed for study by cutting and scanning a brain layer by layer.

And for art in particular, optimization probably is the correct way of framing it, since we have a theoretical preference ordering among art pieces (or world trajectories). You might say that doesn't matter and that we could say the same about proving theorems or landing on the moon, but that treating them as optimization problems is a terrible idea since pure search doesn't work. But actual algorithms won't work by directly searching for good art, but by searching for good artistic algorithms, just like humans got to the moon by performing a search (evolution) for competent organisms. There's a difference in that in art we are directly optimizing for the objective, instead of using a proxy, but the algorithmic performance landscape is probably much less discontinuous than the one of the capacity of an organism to go to the moon.
Oh yeah, sorry, I'm the one making the weird assumptions about the nature of the world. You've vaulted over every single argument I made and gone straight to making bizarrely confident claims about the shape of a hypothetical mathematical function describing a cultural process that has resisted definition since its inception. Do you really not see how tenuous your claims are?

At a very basic level, it doesn't matter how good whatever artefact the machine makes is, even if it was perfectly tailored to my individual tastes. I will value it far less than something human-made. This isn't me being facetious or pretentious, it's actually just how I've lately realised I value art. Messing around with Stable Diffusion has been for me most instructive in this regard. Many people (not everyone, I know) feel the same. And this will feed back into culture itself, revealing your proposition as self-defeating. I don't care how creatively you try to describe the loss landscape of your art algo to me, it cannot actually evade this fact that I have pointed out.

I mean this seriously dude, log off of reddit, log off of LW, read some Kuhn, talk to real human people, look at some clouds for a while, take a deep breath. You are fundamentally misguided about how art and the world around you work, and this must be damaging to you in more ways than shitty reddit comment threads.
this is embarrassing and you’re 15 years old.
The fact that you believe you can mathematically "solve" something subjective like art should terminate this discussion instantly because it is clear you *do not* understand art conceptually. Sheer idiocy wrapped in arrogance.
Visual art has already been perfected in the "Socialist Realism" movement. The only thing left to do is use AI to mass-produce endless iterations of this kind of work.
Your flaw is that you think that any numbers the computer spits out must be real. This is how a toddler understands the world.
https://youtu.be/vPeRElll3Hw