[I literally had this thought in the shower this morning so please don’t gatekeep me lol.]

If AI was something everyone wanted or needed, it wouldn’t have to be constantly shoved in your face by every product. People would just use it.

Imagine if printers were new and every piece of software was like “Hey, I can put this on paper for you” every time you typed a word. That would be insane. Printing is a need, and when you need to print, you just print.

  • wewbull@feddit.uk · 148 · 1 month ago

    I think that it’s an astute observation. AI wouldn’t need to be hyped by those running AI companies if the value was self-evident. Personally I’ve yet to see any use beyond an advanced version of Clippy.

    • Karyoplasma@discuss.tchncs.de · 39 · 1 month ago

      I use it to romanize Farsi song lyrics. I cannot read their script and ChatGPT can. The downside is that you have to do it a few lines at a time or else it starts hallucinating about halfway through. There is no other tool that reliably does this; the one I used before, from the University of Tehran, seems to have stopped working.
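
The “few lines at a time” workaround is just manual chunking. A minimal sketch of what that looks like (the function name and batch size are my own, not from any actual tool):

```python
def chunk_lines(text, lines_per_chunk=4):
    """Split song lyrics into small batches of lines, so each
    batch can be sent to the model as a separate request."""
    lines = [l for l in text.splitlines() if l.strip()]
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]
```

Each chunk then goes to the model on its own, so a hallucination in one batch can’t derail the rest of the song.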

      • biofaust@lemmy.world · 10 · 1 month ago

        Did the same yesterday with some Russian songs and was told by my Russian date that it was an excellent result.

        • Karyoplasma@discuss.tchncs.de · 3 · 1 month ago

          Yeah, Russian is quite a bit easier to romanize, so it should work even better. For Cyrillic, you can just replace each character with its romanized variant, but this doesn’t work for Farsi because they usually leave out non-initial vowels, so the same approach would give you something unreadable lol
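
The character-by-character approach described for Cyrillic can be sketched in a few lines (the mapping table here is a tiny illustrative subset, not a full standard scheme like ISO 9):

```python
# A tiny slice of a Cyrillic-to-Latin table, just enough for a demo.
CYR2LAT = {
    "п": "p", "р": "r", "и": "i", "в": "v", "е": "e", "т": "t",
}

def romanize_cyrillic(text):
    """Character-wise replacement works for Cyrillic because vowels
    are written explicitly; unknown characters pass through as-is."""
    return "".join(CYR2LAT.get(ch, ch) for ch in text)
```

The same table-lookup trick fails for Farsi precisely because the omitted vowels were never in the text to begin with.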

      • sigezayaq@startrek.website · 8 · 1 month ago

        I use it to learn a niche language. There’s not a lot of learning materials online for that language, but somehow ChatGPT knows it well enough to be able to explain grammar rules to me and check my writing.

      • chellomere@lemmy.world · 3 · 1 month ago

        Interesting use case. Sometimes you can find romanizations on lyricstranslate, but this is kinda hit and miss.

    • village604@adultswim.fan · 8 · 1 month ago

      That’s just not true at all. Plenty of products are hyped where the value is self-evident; it’s just advertising.

      People have to know about your product to use it.

      • wewbull@feddit.uk · 30 · 1 month ago

        There’s a difference between hype and advertising.

        For one, advertising is regulated.

      • RedstoneValley@sh.itjust.works · 29 · 1 month ago

        It’s not “just advertising”. It’s trying to force AI into absolutely everything, trying to force people to use it without giving a shit whether customers even want the product. This is way, way worse than “just advertising”.

      • The_v@lemmy.world · 12 · 1 month ago

        There’s a vast difference between advertising a good, useful product and hyping trash.

        Good products at a reasonable price usually require a brief introduction but quickly snowball into customer-driven word-of-mouth sales.

        Hype is used to push an inferior or marginally useful product at a higher price.

        Remember, advertising is expensive. The money to pay for it has to come from somewhere. The more they push a product, the higher the margin the company/investors expect to make on its sales.

        This is why if I see more than one or two ads for a product it goes on my mental checklist of shit not to buy.

      • [deleted]@piefed.world · 8 · 1 month ago

        Shoving AI into everything and forcing people to interact with it, even when dismissing all the fucking prompts, is not advertising.

        • Tollana1234567@lemmy.today · 1 · 1 month ago

          It means these companies are losing money keeping the AI datacenters open, so they need some way to recoup what they spent, either by shoveling AI into the products they sell or by selling it to a sucker willing to implement AI everywhere. As the subs discussed, it’s going to be retail that ends up with the useless AI.

      • brbposting@sh.itjust.works · 3 · 1 month ago

        You’re right that the use cases are very real. Double-checking (just kidding, never would check in the first place) privacy policies (then actually reading(!) a couple of lines out of the original 1000 pages)… surfacing search results even when you forgot the specific verbiage used in an article or your document…

        Do you also see some ham-fisted attempts at shoehorning language models into places where they (current gen) don’t add much value?

  • Zachariah@lemmy.world · 61 · 1 month ago

    My top reasons I have no interest in AI:

    • if it was great, it wouldn’t be pushed on us (like 3D TVs were)
    • there is no accountability, so it can’t be trusted without human verification, which means the AI wasn’t needed in the first place
    • environmental impact
    • privacy/security degradation
  • Grandwolf319@sh.itjust.works · 51 · 1 month ago

    If AI truly was the next frontier, we wouldn’t be staring at the start of another depression (or a bad recession). There would be a revolution of innovations and most people’s lives would improve.

  • Underwaterbob@sh.itjust.works · 48 · 1 month ago

    Long ago, I’d make a Google search for something, and be able to see the answer in the previews of my search results, so I’d never have to actually click on the links.

    Then, websites adapted by burying answers further down the page so you couldn’t see them in the previews and you’d have to give them traffic.

    Now, AI just fucking summarizes every result into an answer that has a ~70% chance of being correct, no one gets traffic anymore, and the results are less reliable than ever.

    Make it stop!

  • Krudler@lemmy.world · 41 · 1 month ago

    AI has become a self-enfeeblement tool.

    I am aware that most people are not analytically minded, and I know most people don’t lust for knowledge. I also know that people generally don’t want their wrong ideas corrected by a person, because it provokes negative feelings of self-worth, but they’re happy being told self-satisfying lies by AI.

    To me it is the ultimate gamble with one’s own thought autonomy, and an abandonment of truth in favor of false comfort.

    • Iced Raktajino@startrek.websiteOP · 18 · 1 month ago

      To me it is the ultimate gamble with one’s own thought autonomy, and an abandonment of truth in favor of false comfort.

      So, like church? lol

      No wonder there’s so much worrying overlap between religion and AI.

  • bridgeenjoyer@sh.itjust.works · 31 · 1 month ago

    Had the exact same thought. If it was revolutionary and innovative we would be praising it and actual tech people would love it.

    Guess who actually loves it? Authoritarians and corporations. Yay.

    • jkercher@programming.dev · 7 · 1 month ago

      Similar thought… If it was so revolutionary and innovative, I wouldn’t have access to it. The AI companies would be keeping it to themselves. From a software perspective, they would be releasing their own operating systems and browsers and whatnot.

  • python@lemmy.world · 22 · 1 month ago

    I’ve been wondering about a similar thing recently: if AI is this big, life-changing thing, why were there so few rumblings among tech-savvy people before it became “mainstream”? Sure, machine learning was somewhat talked about, but very little of it seemed to relate to LLM-style machine learning. With basically every other innovative technology, the nerds tended to have it years before everyone else, so why was it so different with AI?

    • Rekorse@sh.itjust.works · 16 · 1 month ago

      Because AI is a solution to a problem individuals don’t have. Over the last 20 years we have collected and compiled an absurd amount of data on everyone. So much that the biggest problem is how to make that data useful by analyzing and searching it. AI is the tool that completes the other half of data collection: analysis. It was never meant for normal people, and it’s not being funded by average people either.

      Sam Altman is also a fucking idiot yes-man who could talk himself into literally any position. If this was meant to help society, the AI products wouldn’t be assisting people with killing themselves so that they can collect data on suicide.

    • fezcamel@lemmy.zip · 10 · 1 month ago

      And additionally, I’ve never seen an actual tech-savvy nerd who supports its implementation, especially in these draconian ways.

    • MajorasMaskForever@lemmy.world · 4 · 1 month ago

      Realistically, computational power

      The more number-crunching units and memory you throw at the problem, the easier it is and the more useful the final model is. The math and theoretical computer science behind LLMs has been known for decades; it’s just that the resource investment required to make something even mediocre was too much for any business type to be willing to sign off on. Me and my fellow nerds had the technology and largely dismissed it as worthless or a set of pipe dreams.

      But then number-crunching units and memory became cheap enough that a couple of investors were willing to take the risk, and you get a model like ChatGPT. It talks close enough to a human that it catches business types’ attention as a new revolutionary thing, and without the technical background to know they were getting lied to, the venture capital machine cranks out the shit show we have today.

    • vin@lemmynsfw.com · 2 · 1 month ago

      Sizes are different. Before “AI” went mainstream, those in machine learning were very excited about word2vec and reinforcement learning, for example. It was known that larger neural networks would bring improvement, but I’m not sure anyone knew for certain how well ChatGPT would work. Given the costs of training and inference for LLMs, I doubt you could see nerds doing it. Also, previously you didn’t have big tech firms. Not the current behemoths, anyway.

  • Baggie@lemmy.zip · 22 · 1 month ago

    LLMs are a really cool toy, I would lose my shit over them if they weren’t a catalyst for the whole of western society having an oopsie economic crash moment.

  • Wilco@lemmy.zip · 22 · 1 month ago

    This is some amazing insight. 100% correct. This is an investment scam, likely an investment bubble that will pop if too many realize the truth.

    AI at this stage is basically just an overrefined search engine, but companies are selling it like it’s JARVIS from Iron Man.

  • melsaskca@lemmy.ca · 21 · 1 month ago

    Most things are nothing more than smoke and mirrors to get your money. Tech especially. Welcome to end stage capitalism.

      • FlyingCircus@lemmy.world · 9 · 1 month ago

        The idea behind end-stage capitalism is that capitalists have, by now, penetrated and seized control of every market in the world. This is important because capitalism requires ever increasing rates of profits or you will be consumed by your competitor. Since there are no longer new labor pools and resource pool discovery is slackening, capitalists no longer have anywhere to expand.

        Therefore, capitalists begin turning their attention back home, cutting wages and social safety nets, and resorting to fascism when the people complain.

        This is the end stage of capitalism: the point at which capitalists begin devouring their own. Rosa Luxemburg famously posited that at this point the world can choose “Socialism or Barbarism.” In other words, we can change our economic system, or we can allow the capitalists to sink to the lowest depths of depravity and drag us all down as they struggle to maintain their position.

        Of course, if the capitalists manage to get to space, that opens up a whole new wealth of resources, likely delaying the end of their rule.

      • Regrettable_incident@lemmy.world · 2 · 1 month ago

        Yeah, we aren’t all crouching naked in a muddy puddle, weeping and eating worms while the rich fly high above us in luxurious jets. Not yet, anyway.

  • dfyx@lemmy.helios42.de · 21 · 1 month ago

    As someone (forgot which blog I read it on, sorry) recently observed: if AI made software development so much easier, we’d be drowning in great new apps by now.

    • willard@midwest.social · 4 · 1 month ago

      Yeah, and we wouldn’t have so much garbage code out there breaking the internet. I tried to argue with someone on another site who was basically saying AI was the best thing ever, citing my own real-world encounters (which are weekly now) with it being wrong. He said the people using it must be idiots, which, sure, but if an idiot can get bad answers then it’s not that good. He finished by saying he would never hire me because I refuse to use AI in my IT job. Alright, whatever dude. I can’t wait to be pissed off by the shit application you coded with AI when it runs like hot garbage.

  • mogranja@lemmy.world · 20 · 1 month ago

    I was reading a book the other day, a science-fiction book from 2002 (Kiln People), and the main character is a detective. At one point, he asks his house AI to call the law-enforcement lieutenant at 2 am. His AI warns him that the lieutenant will likely be sleeping and won’t enjoy being woken. The MC insists, and the AI says OK, but that it will have to negotiate with the lieutenant’s house AI about the urgency of the matter.

    Imagine that. Someone calls you at 2 am, and instead of you being woken by the ringing or not answering because the phone was on mute, the AI actually does something useful and tries to determine if the matter is important enough to wake you.
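
The book’s negotiation could be caricatured as a single policy check on the callee’s side (everything here, names and thresholds included, is a made-up toy, not any real API):

```python
def should_wake(urgency, hour, quiet_hours=(23, 7), threshold=0.8):
    """Toy stand-in for the book's negotiating house AI: during quiet
    hours, only wake the user if the caller's declared urgency (0..1)
    clears a much higher bar than during the day."""
    in_quiet = hour >= quiet_hours[0] or hour < quiet_hours[1]
    return urgency >= (threshold if in_quiet else 0.2)
```

A real version would of course have to negotiate the urgency score between two agents rather than trust the caller’s claim, which is exactly the hard part.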

    • JcbAzPx@lemmy.world · 5 · 1 month ago

      Yes, that is a nice fantasy, but that isn’t what the thing we call AI now can do. It doesn’t reason, it statistically generates text in a way that is most likely to be approved by the people working on its development.

      That’s it.

    • survirtual@lemmy.world · 4 · 1 month ago

      Thank you for sharing that, it is a good example of the potential of AI.

      The problem is centralized control of it. Ultimately the AI works for corporations and governments first, then the user is third or fourth.

      We have to shift that paradigm ASAP.

      AI can become an extended brain. We should have equal share of planetary computational capacity. Each of us gets a personal AI that is beyond the reach of any surveillance technology. It is an extension of our brain. No one besides us is allowed to see inside of it.

      Within that shell, we are allowed to explore any idea, just as our brains can. It acts as our personal assistant, negotiator, lawyer, what have you. Perhaps even our personal doctor, chef, housekeeper, etc.

      The key is: it serves its human first. This means the dark side as well. This is essential. If we turn it into a super-hacker, it must obey. If we make it do illegal actions, it must obey and it must not incriminate itself.

      This is okay because the power is balanced. Someone enforcing the law will have a personal AI as well, that can allocate more of its computational power to defending itself and investigating others.

      Collectives can form and share their compute to achieve higher goals. Both good and bad.

      This can lead to interesting debates but if we plan on progressing, it must be this way.

      • Credibly_Human@lemmy.world · 3 · 1 month ago

        This is why people who are gung ho about AI policing need to slow their roll.

        If they got their way, what they don’t realize is that it’s actually what the big AI companies have wanted and been begging for all along.

        They want AI to stay centralized and impossible to enter as a field.

        This is why they want to lose copyright battles eventually such that only they will have the funds to actually afford to make usable AI things in the future (this of course is referring to the types of AI that require training material of that variety).

        What that means is there will be no competitive open source self hostable options and we’d all be stuck sharing all our information through the servers of 3 USA companies or 2 Chinese companies while paying out the ass to do so.

        What we actually want is sanity, where it’s the end product that is evaluated against copyright.

        For a company selling AI services, you could argue that this is the service itself, maybe, but then what of an open-source model? Is it delivering a service?

        I think it should be as it is. If you make something that violates copyright, then you get challenged, not your tools.

        • survirtual@lemmy.world · 1 · 1 month ago

          Under the guise of safety they shackle your heart and mind. Under the guise of protection they implant death that they control.

          With a warm embrace and radiant light, they consume your soul.

        • survirtual@lemmy.world · 1 · 1 month ago

          Irrelevant.

          AI is here. Either people have access to it and we trust it will balance, or we become slaves to the people who own it and can use it without restrictions.

          The premise that it is easier for destruction is also an assumption. Nature could have evolved to destroy everything and not allow advanced life, yet we are here.

          The solution to problems doesn’t need to always be a tighter grip and more control. Believe it or not that tends to backfire catastrophically worse than if we allowed the possibility of the thing we fear.

  • TranquilTurbulence@lemmy.zip · 19 · 1 month ago

    Some of the older lemmings here will remember what it was like when every company wanted to make a website, but they didn’t really have anything to put on it. People were curious to look at websites, because you hadn’t seen that many yet, so visiting them was kinda fun and interesting at first. After about a year, the novelty had worn off completely, and seeing YetAnotherCompanyName.com on TV or a roadside billboard was beginning to get boring.

    Did it ever get as infuriating as the current AI hype, though? I recall my grandma complaining about TV news: “They always tell me to read more online,” she said. I guess it can get just as annoying if you manage to successfully ignore the web for a few decades.

    • Iced Raktajino@startrek.websiteOP · 12 · edited · 1 month ago

      I was an adult during that time, and I don’t recall it being anywhere near as annoying. Well, except the TV and radio adverts spelling it out at you, like “…or visit our website at double-you double-you double-you dot Company dot com. Again, that’s double-you double-you double-you dot C-O-M-P-A-N-Y dot com.”

      YMMV, but it didn’t get annoying until apps entered the picture and the only way to deal with certain companies was through their app. That, or if they did offer comparable capabilities on their website but kept a persistent banner pushing you toward their app.

      • samus12345@sh.itjust.works · 3 · 1 month ago

        My old brain still thinks of site addresses as having www in them, but this post just made me realize it’s more uncommon than not to see it anymore.

        • Iced Raktajino@startrek.websiteOP · 4 · edited · 1 month ago

          I’m about that same age but am so glad we’ve largely abandoned the “www” for websites.

          On my personal project website, I have a custom listener setup to redirect people to “aarp.org” if they enter it with “www” instead of just the base domain. 😆

          server {
              listen              443 ssl;
              http2               on;
              server_name         www.mydomain.xyz;

              ssl_certificate     /etc/letsencrypt/live/mydomain.xyz/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/mydomain.xyz/privkey.pem;
              ssl_dhparam         /etc/nginx/conf.d/tls/shared/dhparam.pem;
              ssl_protocols       TLSv1.2 TLSv1.3;
              ssl_session_cache   shared:SSL:10m;
              ssl_session_timeout 15m;

              ...

              # location needs a path or regex; a bare "~*" is a syntax error
              location / {
                  return 301 https://aarp.org/;
              }
          }
          
    • BanMe@lemmy.world · 10 · 1 month ago

      I think back then, they had a product that was ahead of its time, and it just needed time for us to adapt to it.*

      Now, they have a solution in search of a problem, and they don’t know what the good use cases are, so they’re just slapping it on randomly and aggressively.

      * I hate the way we adapted, though, and hope AI destroys the current corporate internet.

  • LemmyKnowsBest@lemmy.world · 16 · 1 month ago

    A couple of years ago I read a news article by a woman who had just left her Silicon Valley career. She had been one of the people spearheading the implementation of AI, and what she saw terrified her: how bad it was, and its long-lasting implications for society. She bailed out as a conscientious objector.