the writer Nina Illingworth, whose work has been a constant source of inspiration, posted this excellent analysis of the reality of the AI bubble on Mastodon (featuring a shout-out to the recent articles on the subject from Amy Castor and @dgerard@awful.systems):

Naw, I figured it out; they absolutely don’t care if AI doesn’t work.

They really don’t. They’re pot-committed; these dudes aren’t tech pioneers, they’re money muppets playing the bubble game. They are invested in increasing the valuation of their investments and cashing out, it’s literally a massive scam. Reading a bunch of stuff by Amy Castor and David Gerard finally got me there in terms of understanding it’s not real and they don’t care. From there it was pretty easy to apply a historical analysis of the last 10 bubbles, who profited, at which point in the cycle, and where the real money was made.

The plan is more or less to foist AI on establishment actors who don’t know their ass from their elbow, causing investment valuations to soar, and then cash the fuck out before anyone really realizes it’s total gibberish and unlikely to get better at the rate and speed they were promised.

Particularly in the media, it’s all about adoption and cashing out, not actually replacing media. Nobody making decisions and investments here particularly wants an informed populace, after all.

the linked mastodon thread also has a very interesting post from an AI skeptic who used to work at Microsoft and seems to have gotten laid off for their skepticism

  • @kromem@lemmy.world · -6 points · 1 year ago

    Hilarious.

    Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny or perform creative tasks.

    Today that’s become so normalized that people are calling things thought to be literally impossible a speculative bubble because advancement that surprised everyone in the industry initially and then again with the next model a year later hasn’t moved fast enough?

    The industry is still learning how to even use the tech.

    This is like TV being invented in 1927 and then people in 1930 saying that it’s a bubble because it hasn’t grown as fast as they expected it to.

    Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks and how that’s going to impact decoupling AI training and operation from Moore’s Law? I’m guessing no.

    Near-perfect analysis, eh? By someone who read and regurgitated analysis by a journalist who writes for a living and may just have an inherent bias towards evaluating information on the future prospects of a technology positioned to replace writers?

    We haven’t even had a public release of multimodal models yet.

    This is about as near perfect of an analysis as smearing paint on oneself and rolling down a canvas on a hill.

    • @bitofhope · 11 points · 1 year ago

      This is like TV being invented in 1927 and then people in 1930 saying that it’s a bubble because it hasn’t grown as fast as they expected it to.

      That’s the exact opposite of a bubble, then. A bubble is when the valuation of some thing grows much faster than the utility it provides.

      Yea sure, maybe we’re still in the early stages with this stuff. We have gotten quite a bit further since back when the funny neural network was seeing and generating dog noses everywhere.

      The reason it’s a bubble is that hypemongers like yourself are treating this tech like a literal miracle and serial grifters are shoehorning it into everything like it’s the new money. Who wants shoelaces when you can have AI shoelaces, the shoelaces with AI! Formerly known as the blockchain shoelaces.

    • @self (OP, admin) · 10 points · 1 year ago

      holy christ shut the fuck up

      • @self (OP, admin) · 9 points · 1 year ago

        Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks and how that’s going to impact decoupling AI training and operation from Moore’s Law? I’m guessing no.

        uhh did OP consider my hopes and dreams, powered by the happiness of literally every single American child? im guessing no. what a buffoon

        • @Soyweiser · 9 points · edited · 1 year ago

          Only five years ago no one in the computer science industry would have taken a bet that AI would be able to explain why a joke was funny

          Iirc it still couldn’t do that; if you create variants of jokes, it pattern-matches them to the OG joke and fails.

          or perform creative tasks.

          Euh what, various creative tasks have been done by AI for a while now. Deepdream is almost a decade old now, and before that there were all kinds of procedural generation tools etc etc. Which could do the same as now, create a very limited set of creative things out of previous data. Same as AI now. This chatgpt cannot create a truly unique new sentence for example (A thing any of us here could easily do).

          • @Akisamb@programming.dev · 2 points · 1 year ago

            This chatgpt cannot create a truly unique new sentence for example (A thing any of us here could easily do).

            What?

            Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

            • @Soyweiser · 4 points · 1 year ago

              People tried this and it just generated the same chatgpt tripe.

            • @self (OP, admin) · 2 points · 1 year ago

              Of course it can, it’s randomly generating sentences. It’s probably better than humans at that. If you want more randomness at the cost of text coherence just increase the temperature.

              you mean like a Markov chain?

              • @Akisamb@programming.dev · 1 point · edited · 1 year ago

                These models are Markov chains, yes. But many things are Markov chains, and I’m not sure that describing these as Markov chains helps gain understanding.

                The way these models generate text is iterative: they do it word by word. Every time they need to generate a word they randomly select one from their vocabulary. The trick to generating coherent text is that different words are more or less likely depending on the previous words.

                For example, for the sentence “that is a huge grey” the word elephant is more likely than flamingo.

                The temperature is the way you select your word. If it is very low you will almost always select the most likely word. Increasing the temperature makes the random choice more random, giving each word a more equal chance.

                Seeing as these models sample randomly, there is nothing preventing them from producing unique text. After all, something like jsbHsbe d dhebsUd is unique but not very interesting.
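
                (For illustration, a minimal sketch in Python of the word-by-word, temperature-scaled sampling described above. The vocabulary, scores, and names are invented for the example, not taken from any real model or library.)

                import numpy as np

                # A toy stand-in for a model: hand-made scores (logits) for what might
                # follow the prompt "that is a huge grey". Words and numbers are invented.
                vocab = ["elephant", "wall", "flamingo", "sky"]
                logits = np.array([4.0, 2.0, 0.5, 0.1])
                rng = np.random.default_rng(0)

                def sample_next_word(logits, temperature):
                    # Temperature rescales the scores before the softmax: a low value
                    # sharpens the distribution toward the most likely word, a high
                    # value flattens it so every word gets a more equal chance.
                    scaled = logits / temperature
                    probs = np.exp(scaled - scaled.max())
                    probs = probs / probs.sum()
                    return vocab[rng.choice(len(vocab), p=probs)]

                # Low temperature almost always picks "elephant"; high temperature
                # spreads the picks across the whole vocabulary.
                for t in (0.2, 1.0, 2.0):
                    print(t, [sample_next_word(logits, t) for _ in range(8)])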

        • Steve · 6 points · 1 year ago

          Teach me!

    • @200fifty · 10 points · 1 year ago

      The industry is still learning how to even use the tech.

      Just like blockchain, right? That killer app’s coming any day now!

    • @YouKnowWhoTheFuckIAM · 8 points · 1 year ago

      Did OP consider the work going on at literally every single tech college’s VC groups in optoelectronic neural networks built on optical components to improve minimisation and how that’s going to impact the decoupling of AI training and operation from Moore’s Law that’s one hope for making processing power gains so that the banner headlines about “Moore’s Law” are pushed back a little further? I’m guessing no.

      You have the insider clout of a 15 year old with a search engine

      • David Gerard (mod, admin) · 6 points · 1 year ago

        You have the insider clout of a 15 year old with a search engine

        my god

    • @swlabr · 8 points · 1 year ago

      This is about as near perfect of an analysis as smearing paint on oneself and rolling down a canvas on a hill.

      That sounds perfect to me dawg

    • Steve · 8 points · 1 year ago

      Have a browse through some threads on this instance before you talk about what the “computer science industry” was thinking 5 years ago as if this is a group of infants.

      If you feel open to it, consider why people who obviously enjoy computing, and know a lot about it, don’t share your enthusiasm for a particular group of tech products. Find the factors that make these things different.

      You might still disagree, you might change your mind. Whatever the fuck happens, you’ll write more compelling posts than whatever the fuck this is.

      You might even provoke constructive, grown-up, discussions.

      • @self (OP, admin) · 7 points · 1 year ago

        I must note that the poster in question earned the fastest ever ban from this instance, as their post was a perfect storm of greasy smarmy bullshit that felt gross to read, and judging by their post history that’s unfortunately just how they engage with information.

        • Steve · 4 points · 1 year ago

          Oh good. Their history was why I relented and wrote something. A typical king shit.