• BroBot9000@lemmy.world · 148 points · 6 months ago (edited)

    Do you really need to have a list of why people are sick of LLM and AI slop?

    AI is literally making people dumber:

    https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

    https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

    They are a massive privacy risk:

    https://www.youtube.com/watch?v=AyH7zoP-JOg&t=3015s

    https://theconversation.com/ai-tools-collect-and-store-data-about-you-from-all-your-devices-heres-how-to-be-aware-of-what-youre-revealing-251693

    They are being used to push fascist ideologies into every aspect of the internet:

    https://newsocialist.org.uk/transmissions/ai-the-new-aesthetics-of-fascism/

    And they are a massive environmental disaster:

    https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117

    https://www.forbes.com/sites/cindygordon/2024/02/25/ai-is-accelerating-the-loss-of-our-scarcest-natural-resource-water/

    Stop being a corporate apologist and stop wrecking the environment with this shit technology.

    Edit: thank you to every AI apologist outing themselves in the comments. Thank you for making blocking you easy.

    • lmmarsano@lemmynsfw.com · 20 points · 6 months ago

      Do you really need to have a list of why people are sick of LLM and AI slop?

      With the number of times that refrain is regurgitated here ad nauseam, need is an odd way to put it. Sick of it might fit the sentiment better. Done with this & not giving a shit is another.

        • lmmarsano@lemmynsfw.com · 1 point · 6 months ago

          OK, but you’re just making yourselves lolcows at this point, where you announce these easy-to-push buttons & people derive joy from pushing them. Imitating AI just to troll is a thing now.

          So…that’s a victory?

    • AnonomousWolf@lemmy.world · 19 points · 6 months ago (edited)

      If you ever take a flight for a holiday, or even drive long-distance, and cry about AI being bad for the environment, then you’re a hypocrite.

      Same goes if you eat beef or have a really powerful gaming rig that you use a lot.

      There are plenty of valid reasons AI is bad, but the environmental argument seems weak, and most people making it are probably hypocrites. It’s barely a drop in the bucket compared to other things.

      • BroBot9000@lemmy.world · 29 points · 6 months ago (edited)

        Ahh, so are you going to acknowledge the privacy invasion and brain rot caused by AI, or are you just going to focus on dismissing the environmental concerns? Because I linked more than just the environmental impacts.

      • Jankatarch@lemmy.world · 25 points · 6 months ago

        Texas has just asked residents to take fewer showers while datacenters built specifically for LLM training continue operating.

        This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.

        • AnonomousWolf@lemmy.world · 2 points · 6 months ago (edited)

          Maybe they should cut down on beef first; it uses far more water than AI and emits far more CO2 (see the comparison worked out below the list):

          • 1 kg beef = 60 kg CO2 - source
          • 1,000 km return flight = 314 kg CO2 - source
          • 1 Bitcoin transaction = 645 kg CO2 - source
          • 1,000 AI prompts = 3 kg CO2 - source
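
          Taking those sources at face value, the arithmetic works out like this: at 3 kg per 1,000 prompts, 1 kg of beef (60 kg CO2) has the same footprint as 60 / 0.003 = 20,000 prompts, and a single 1,000 km return flight roughly 105,000 prompts.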
          • commie@lemmy.dbzer0.com (banned from community) · 1 point · 6 months ago

            Your source about beef relies on Poore-Nemecek 2018, a paper with dubious methodology.

        • CXORA@aussie.zone · 14 points · 6 months ago

          When someone disagrees with me - echo chamber.

          When someone agrees with me - logical discussion.

          • Sl00k@programming.dev · 5 points · 6 months ago

            Then why are you avoiding a logical discussion of the environmental impact and spouting misinformation instead?

            The fact of the matter is that eating a single steak or a pound of ground beef will eclipse most people’s total AI usage. Obviously most can’t escape driving, but for those of us in cities, taking up biking will do far more for your environmental footprint than giving up AI.

            Serving AI models isn’t even as bad as watching Netflix. This counterculture against AI is largely misdirected anger that should be thrown at unregulated capitalism. Unregulated data centers. Unregulated growth.

            Training is bad, but training is a small piece of the puzzle that happens infrequently, and again it circles back to the regulation problem.

            • CXORA@aussie.zone · 3 points · 6 months ago

              It is easier to oppose a new thing than change ingrained habits.

              If your house is on fire, it is reasonable to be mad at someone who throws a little torch onto it.

      • oatscoop@midwest.social · 4 points · 6 months ago (edited)

        Weird … It looks like there’s nothing stopping me from signing up for an account on dbzer0 even though I’m not actually an anarchist.

    • FauxLiving@lemmy.world · 4 points · 6 months ago

      Do you really need to have a list of why people are sick of LLM and AI slop?

      We don’t need a collection of random ‘AI bad’ articles because your entire premise is flawed.

      In general, people are not ‘sick of LLM and AI slop’. Real people, who are not chronically online, have fairly positive views of AI, and public sentiment about AI is actually becoming more positive over time.

      Here is Stanford’s report on the public opinion regarding AI (https://hai.stanford.edu/ai-index/2024-ai-index-report/public-opinion).

      Stop being a corporate apologist and stop wrecking the environment with this shit technology.

      My dude, it sounds like you need to go out into the environment a bit more.

      • mojofrododojo@lemmy.world · 2 points · 6 months ago (edited)

        My dude, it sounds like you need to go out into the environment a bit more.

        oh you have a spare ecosystem in the closet for when this one is entirely fucked huh? https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution

        stop acting like it’s a rumor. the problem is real, it’s already here, they’re already racing to build the data centers - so what, we can get taylor swift grok porn? nothing in that graph supports your premise either.

        That Stanford graph is based on surveys from 2022 and 2023 - it’s 2025 here in reality. Wake up. Times change.

        • FauxLiving@lemmy.world · 2 points · 6 months ago

          That Stanford graph is based on surveys from 2022 and 2023 - it’s 2025 here in reality. Wake up. Times change.

          Objective polling shows attitudes about AI have been improving. Do you have any actual evidence to support your implication that this is no longer the case?

          Being self-righteous, rude and abrasive doesn’t mean you’re correct.

          • mojofrododojo@lemmy.world · 1 point · 6 months ago

            You disregard everyone else’s evidence but expect us to embrace your two-year-old data.

            You disregard what mental health experts are saying this is doing to actual people.

            You callously disregard the wellbeing of others for the benefit of AI bros. Just because you’re ignoring the evidence doesn’t mean you’re correct, numpty. Being willfully ignorant of the harms this causes to the environment just tells me you’re profiting off of it, or a fanboy.

      • mojofrododojo@lemmy.world · 2 points · 6 months ago

        We don’t need a collection of random ‘AI bad’ articles because your entire premise is flawed.

        god forbid you have evidence to support your premise. huh.

    • Electricd@lemmybefree.net · 4 points · 6 months ago (edited)

      AI is literally making people dumber: https://www.microsoft.com/en-us/research/wp-content/uploads/2025/01/lee_2025_ai_critical_thinking_survey.pdf

      We surveyed 319 knowledge workers who use GenAI tools (e.g., ChatGPT, Copilot) at work at least once per week, to model how they enact critical thinking when using GenAI tools, and how GenAI affects their perceived effort of thinking critically. Analysing 936 real-world GenAI tool use examples our participants shared, we find that knowledge workers engage in critical thinking primarily to ensure the quality of their work, e.g. by verifying outputs against external sources. Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving. Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort. When using GenAI tools, the effort invested in critical thinking shifts from information gathering to information verification; from problem-solving to AI response integration; and from task execution to task stewardship. Knowledge workers face new challenges in critical thinking as they incorporate GenAI into their knowledge workflows. To that end, our work suggests that GenAI tools need to be designed to support knowledge workers’ critical thinking by addressing their awareness, motivation, and ability barriers.

      I would not say “can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving” equals “literally making people dumber”. A sample size of 319 isn’t really representative anyway, and the sample mainly covered one specific type of worker. People switch from searching to verifying, which doesn’t sound too bad if done correctly. The paper associates critical thinking with verifying everything (“Higher confidence in GenAI’s ability to perform a task is related to less critical thinking effort”); I’m not sure I agree with that.

      The study is also aimed only at people using GenAI at work, not everyday use. I personally discovered so many things with GenAI, and I know to always question what the model says on specific topics or questions, because these models tend to hallucinate. You could also say the internet made people dumber, but those who know how to use it will be smarter.

      https://www.theregister.com/2025/06/18/is_ai_changing_our_brains/

      They had to write an essay in 20 minutes… obviously most people would just generate the whole thing and fix little problems here and there. But if you have to think less because you’re just fixing stuff instead of inventing, well, yeah, you use your brain less. That doesn’t make you dumb. It’s a bit like saying paying by card makes you dumber than paying in cash because you use less of your brain: with cash you have to count how much to hand over and how much change you should get back.

      Yes, if you get helped by a tool or someone, it will be less intensive for your brain. Who would have thought?!

    • Electricd@lemmybefree.net · 3 points · 6 months ago

      They are being used to push fascist ideologies into every aspect of the internet:

      Everything can be used for that. If anything, I believe AI models are too restricted and tend not to argue about controversial subjects, which prevents you from learning anything. Censorship sucks.

    • Electricd@lemmybefree.net · 3 points · 6 months ago (edited)

      They are a massive privacy risk:

      I do agree with this, but at this point everyone uses Instagram, Snapchat, Discord, and whatever else to share DMs that are probably being sniffed by the NSA and used by companies for profiling. People are never going to change.

  • Deflated0ne@lemmy.world · 71 points · 6 months ago

    The problem isn’t AI. The problem is Capitalism.

    The problem is always Capitalism.

    AI, Climate Change, rising fascism, all our problems are because of capitalism.

    • Ofiuco@piefed.ca · 9 points · 6 months ago (edited)

      Can’t delete this old-ass comment because the fediverse is so free it forces me not to delete it.
      Anyway, don’t care; I still think the root of the problem is humans, and we will ruin whatever system is in place.
      Even if Lemmy users want to blindly believe that switching away from capitalism will fix every single problem.

      • zeca@lemmy.ml · 16 points · 6 months ago

        Problems would exist in any system, but not the same problems. Each system has its set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.

      • Eldritch@piefed.world · 10 points · 6 months ago

        While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Both account for human nature and focus on turning it against itself.

          • pebbles@sh.itjust.works · 6 points · 6 months ago

            I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.

            • JargonWagon@lemmy.world · 2 points · 6 months ago

              Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. I hope things are better there nowadays and people aren’t going missing anymore just for speaking out against their government.

              • pebbles@sh.itjust.works · 4 points · 6 months ago

                I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.

              • Ceedoestrees@lemmy.world (banned) · 1 point · 6 months ago (edited)

                Time doesn’t end with corrupt power; those are just things that happen. Bad shit always happens; it’s the why, how often, and how we fix it that are more indicative of success. Every machine breaks down eventually.

        • Ace T'Ken@lemmy.ca · 2 points · 6 months ago

          I’ll answer. Because some people see these systems as “good” regardless of political affiliation, want them furthered, and see any cost as worth it. If an anarchist or communist sees these systems in a positive light, then they will absolutely try to use them at scale. These people absolutely exist, and you can find many examples of them on Lemmy. Try DB0.

          • Eldritch@piefed.world · 5 points · 6 months ago

            And the point of anarchist or actual communist systems is that such scale would be minuscule, not massive national or unanswerable state scales.

            And yes, I’m an anarchist. I know DB0 and their instance and generally agree with their stance, because it would allow any one of us to effectively advocate against it if we desired to.

            There would be no tech broligarchy forcing things on anyone. They’d likely all have been hanged long ago, and no one would miss them, as they provide nothing of real value anyway.

            • Ace T'Ken@lemmy.ca · 1 point · 6 months ago

              DB0 has a rather famous record of banning users who do not agree with AI. See !yepowertrippinbastards@lemmy.dbzer0.com or others for many threads complaining about it.

              You have no way of knowing what the scale would be, as it’s all a thought experiment, however, so let’s play at that. If you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it into the state OS or whatever?

              Buuuuut… keep in mind that in previous Communist regimes (even if you disagree that they were “real” Communists), what the state says will apply. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you wasteful, Comrade? Why do you hate your country?

              • Eldritch@piefed.world · 3 points · 6 months ago (edited)

                Yes, I have seen posts on it. Suffice to say, despite being an anarchist, I don’t have an account there for reasons, and I don’t agree with everything they do.

                The situation with those bans I might consider heavy-handed and perhaps overreaching. But by the same token, it’s a bit of a reflection of some of those who were banned: overzealous, lacking nuance, etc.

                The funny thing is, they pretty much dislike the tech bros as much as anyone here does. You generally won’t ever find them defending the tech bros’ actions. They want AI that they can run from their home, not something snarfing up massive public resources, massively contributing to climate change, or stealing anyone’s livelihood. Hell, many of them want to run off the grid on wind and solar. But, as always happens with the left, we can agree with each other 90% and still never tolerate or understand each other because of the 10%.

                PS

                We do know the scale. Your use of “the state” in reference to anarchism implies you’re unfamiliar with it. Anarchism and communism are against “the state” for the same reasons you’re wary of it: it’s too powerful and unanswerable.

      • chuckleslord@lemmy.world · 15 points · 6 months ago

        That’s a pathetic, defeatist world view. Yeah, we’re victims of our circumstances, but we can make the world a better place than what we were raised in.

      • Ceedoestrees@lemmy.world (banned) · 1 point · 6 months ago (edited)

        The fittest survive. The problem is creating systems where the best fit are people who lack empathy and a moral code.

        A better solution would be selecting world leaders from the population at random.

  • Truscape@lemmy.blahaj.zone · 58 points · 6 months ago (edited)

    Distributed platform owned by no one, founded by people who support individual control of data and content access

    Majority of users are proponents of owning what one makes and supporting those who create art and entertainment

    AI industry shits on all of the above by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit

    Buddy, do you know what you’re here for?

    EDIT: removed bot accusation, forgot to check user history

    • dactylotheca@suppo.fi · 39 points · 6 months ago

      Or are you yet another bot lost in the shuffle?

      Yes, good job, anybody with opinions you don’t like is a bot.

      It’s not like this was even a pro-AI post; it was just pointing out that even the most facile “ai bad, applause please” stuff will get massively upvoted.

      • Truscape@lemmy.blahaj.zone · 8 points · 6 months ago

        Yeah, I guess that was a bit too far; I posted before I checked the user history or really gave it time to sit in my head.

        Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.

        • dactylotheca@suppo.fi · 11 points · 6 months ago

          HaVe YoU ConSiDeReD thE PoSSiBiLiTY that I’m not pro-AI and I understand the downsides, and can still point out that people flock like lemmings (*badum tss*) to any “AI bad” post regardless of whether it’s actually good or not?

          • Doll_Tow_Jet-ski@fedia.io · 5 points · 6 months ago

            Ok, so your point is: Look! People massively agree with an idea that makes sense and is true.

            Color me surprised…

          • grrgyle@slrpnk.net · 3 points · 6 months ago

            Why would a post need to be good? It just needs a good point. This post is good enough, even if I don’t agree that we have too many facile “AI = bad” posts.

            Depends on the community, but for most of them pointing out ways that ai is bad is probably relevant, welcome, and typical.

        • Voyajer@lemmy.world · 2 points · 6 months ago

          Why would you lend any credence to the weakest appeal to the masses presented on the site?

  • Rose@slrpnk.net · 47 points · 6 months ago

    The currently hot LLM technology is very interesting, and I believe it has legitimate use cases, if we develop them into tools that assist work. (For example, I’m very intrigued by the stuff that’s happening in the accessibility field.)

    I mostly have a problem with the AI business: ludicrous use cases (shoving AI into places where it has no business being), sheer arrogance about the sociopolitics in general, the environmental impact. LLMs aren’t good enough for “real” work, but snake oil salesmen keep saying they can do it, and uncritical people keep falling for it.

    And of course, the social impact was just not what we were ready for. “Move fast and break things” may be a good mantra for developing tech, but not for releasing stuff that has vast social impact.

    I believe the AI business and the tech hype cycle is ultimately harming the field. Until now, AI technologies were gradually developed and integrated into software where they served a purpose. Now the field is marred with controversy for decades to come.

    • UnderpantsWeevil@lemmy.world · 6 points · 6 months ago

      If we develop them into tools that assist work.

      Spoilers: We will not

      I believe the AI business and the tech hype cycle is ultimately harming the field.

      I think this is just an American way of doing business. And it’s awful, but at the end of the day people will adopt technology if it makes them greater profit (or at least screws over the correct group of people).

      But where the Americanized AI seems to suffer most is in their marketing fully eclipsing their R&D. People seem to have forgotten how DeepSeek spiked the football on OpenAI less than a year ago by making some marginal optimizations to their algorithm.

      The field isn’t suffering from the hype cycle nearly so much as it suffers from malinvestment. Huge efforts to make the platform marketable. Huge efforts to shoehorn clumsy chat bots into every nook and cranny of the OS interface. Vanishingly little effort to optimize material consumption or effectively process data or to segregate AI content from the human data it needs to improve.

  • RushLana@lemmy.blahaj.zone · 44 points · 6 months ago

    How dare people not like the automatic bullshit machine pushed down their throat…

    Seriously, generative AI’s accomplishments are:

    • Making mass spam easier
    • Burning the planet
    • Making people lose their jobs while not even being a decent replacement
    • Making every search engine and information source worse
    • Creating an economic bubble that will fuck up the economy even harder
    • Easing mass surveillance and weakening privacy everywhere
    • Ek-Hou-Van-Braai@piefed.social (OP) · 14 points · 6 months ago (edited)

      One could have said many of the same things about a lot of new technologies.

      The Internet, Nuclear, Rockets, Airplanes etc.

      Any new disruptive technology comes with drawbacks and can be used for evil.

      But that doesn’t mean it’s all bad, or that it doesn’t have its uses.

      • RushLana@lemmy.blahaj.zone · 11 points · 6 months ago

        Give me one real-world use that is worth the downsides.

        As a dev, I can already tell you it’s not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports; AI-generated code rarely works and doesn’t integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding); and AI-written documentation is riddled with errors and barely legible.

        And even if AI were remotely good at something, it would still be the equivalent of a microwave trying to replace the entire restaurant kitchen.

        • Ek-Hou-Van-Braai@piefed.social (OP) · 5 points · 6 months ago (edited)

          I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.

           There are MANY examples of LLMs being useful; it has its drawbacks just like any big technology, but saying it has no uses that are worth it is ridiculous.
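
           For what it’s worth, the plumbing for the lights example is simple. Here’s a minimal sketch of the shape of that setup, assuming an Ollama-style local endpoint on localhost:11434; the model name and the set_light helper are stand-ins for whatever you actually run:

           ```python
           import json
           import urllib.request

           def set_light(room: str, state: str) -> None:
               # Stand-in: call your actual home-automation API here.
               print(f"light in {room} -> {state}")

           # The model turns free-form speech into a small JSON action.
           INSTRUCTIONS = (
               'You control smart lights. Reply with JSON only, e.g. '
               '{"room": "bedroom", "state": "off"}. Command: '
           )

           def handle_command(command: str) -> None:
               req = urllib.request.Request(
                   "http://localhost:11434/api/generate",
                   data=json.dumps({
                       "model": "llama3.2",  # whatever small model you run locally
                       "prompt": INSTRUCTIONS + command,
                       "stream": False,
                   }).encode(),
                   headers={"Content-Type": "application/json"},
               )
               with urllib.request.urlopen(req) as resp:
                   action = json.loads(json.load(resp)["response"])
               set_light(action["room"], action["state"])

           handle_command("it's way too bright in the bedroom")
           ```

           Put speech-to-text (Whisper or similar) in front of it and you have the whole voice pipeline, all offline.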

          • PeriodicallyPedantic@lemmy.ca · 10 points · 6 months ago (edited)

            That’s like saying “asbestos has some good uses, so we should just give every household a big pile of it without any training or PPE”

            Or “we know leaded gas harms people, but we think it has some good uses so we’re going to let everyone access it for basically free until someone eventually figures out what those uses might be”

            It doesn’t matter that it has some good uses and that later we went “oops, maybe let’s only give it to experts to use”. The harm has already been done by eager supporters, intentional or not.

              • PeriodicallyPedantic@lemmy.ca · 3 points · 6 months ago

                It’s not a strawman, it’s hyperbole.

                There are serious known harms and we suspect that there are more.
                There are known ethical issues, and there may be more.
                There are few known benefits, but we suspect that there are more.

                Do we just knowingly subject untrained people to harm just to see if there are a few more positive usecases, and to make shareholders a bit more money?
                How does their argument differ from that?

          • RushLana@lemmy.blahaj.zone · 5 points · 6 months ago

            But we could do voice assistants well before LLMs (look at Siri), and without setting everything on fire.

            And seriously, I asked for something that’s worth all the downsides and you bring up Clippy 2.0???

            Where are the MANY examples? Why are LLM/genAI companies burning money? Where are the companies making use of the supposedly many uses?

            I genuinely want to understand.

            • Ek-Hou-Van-Braai@piefed.social (OP) · 2 points · 6 months ago

              You asked for one example, I gave you one.

              It’s not just voice; I can ask it complex questions, and it can understand context and turn on lights or close blinds based on that context.

              I find it very useful, with no real drawbacks.

              • RushLana@lemmy.blahaj.zone · 4 points · 6 months ago

                I asked for an example that makes up for the downsides everyone has to pay.

                So, no! A better shutter-puller or a maybe marginally better voice assistant is not gonna cut it. And again, that’s stuff Siri and home-automation tools were able to do since 2014 at a minimum.

                • Ek-Hou-Van-Braai@piefed.social (OP) · 2 points · 6 months ago

                  Siri has privacy issues, and only works when connected to the internet.

                  What are the downsides of me running my own local LLM? I’ve named many benefits, privacy being one of them.

              • JcbAzPx@lemmy.world · 1 point · 6 months ago

                The fact that was the best you could come up with is far more damning than not even having one.

          • Rampsquatch@sh.itjust.works · 3 points · 6 months ago

            I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.

            Neat trick, but it’s not worth the headache of setup when you can do all that by getting off your chair and pushing buttons. Hell, you don’t even have to get off your chair! A cellphone can do all that already, and you don’t even need voice commands to do it.

            Are you able to give any actual examples of a good use of an LLM?

            • Ek-Hou-Van-Braai@piefed.social (OP) · 2 points · 6 months ago

              Like it or not, that is an actual example.

              I can lie in my bed and turn off the lights without touching my phone, or turn on certain music without touching my phone.

              I could ask whether I remembered to lock the front door, etc.

              But okay, I’ll play your game; let’s pretend that doesn’t count.

              I can use my local AI to draft documents or emails, speeding up the process a lot.

              Or I can use it to translate.

              • Rampsquatch@sh.itjust.works · 1 point · 6 months ago

                If you want to live your life like that, go for it; that’s your choice. But I don’t think those applications are worth the cost of running an LLM. To be honest, I find it frivolous.

                I’m not against LLMs as a concept, but the way they get shoved into everything without thought and without an “AI” free option is absurd. There are good reasons why people have a knee-jerk anti-AI reaction, even if they can’t articulate it themselves.

                • Ek-Hou-Van-Braai@piefed.social (OP) · 1 point · 6 months ago (edited)

                  It’s not expensive for me to run a local LLM, I just use the hardware I’m already using for gaming. Electricity is cheap and most people with a gaming PC probably use more electricity gaming than they would running their own LLM and asking it some questions.

                  I’m also against shoving AI into everything and not making it opt-in. I’m also worried about privacy, concentration of power, etc.

                  But just outright saying LLMs are bad is ridiculous.

                  And saying there is no good reason to use them is ridiculous. Can we stop doing that?

      • PeriodicallyPedantic@lemmy.ca · 2 points · 6 months ago

        Of those, only the internet was turned loose on an unsuspecting public, and even then people had decades of the faucet slowly being opened to prepare.

        Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like, gave every man, woman, and child a rocket, with no training? Good and evil wouldn’t even come into it; it’d be chaos and destruction.

        Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.

        It’s not a matter of good and evil, it’s a matter of harm.

        • Ek-Hou-Van-Braai@piefed.social (OP) · 2 points · 6 months ago

          The internet kind of was turned loose on an unsuspecting public. Social media has caused, and is still causing, a lot of harm.

          Did you really compare every household having a nuclear reactor with people having access to AI?

          How is that even remotely a fair comparison?

          To me the Internet being released on people and AI being released on people is more of a fair comparison.

          Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.

          • PeriodicallyPedantic@lemmy.ca · 2 points · 6 months ago

            You know the public got trickle-fed the internet for decades before it was ubiquitous in every house, and then another decade before it was ubiquitous in everyone’s pocket. People had literal decades to learn how to protect themselves and for the job market to adjust. During that time there was lots of research and information on how to protect yourself, and although regulation mostly failed to do anything, the learning material was adapted for all ages and promoted.

            Meanwhile, LLMs are at least as impactful as the internet, and they were released to the public almost without notice. Research on their effects is being done now that it’s already too late, and the public doesn’t have any tools to protect itself. What meager material on appropriate use exists hasn’t been well researched or adapted to all ages, when it isn’t being dismissed as “the insane thoughts of doomer Luddites, not to be taken seriously” by AI supporters.

            The point is that people are being handed this catastrophically dangerous tool, without any training or even research into what the training should be. And we expect everything to be fine just because the tool is easy to use and convenient?

            These companies are being allowed to bulldoze not just the economy but the mental resilience of entire generations, for the sake of a bit of shareholder profit.

    • mechoman444@lemmy.world · 11 points · 6 months ago

      Yes. AI can be used for spam, job cuts, and creepy surveillance, no argument there, but pretending it’s nothing more than a corporate scam machine is just lazy cynicism. This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors, giving deaf people real-time conversations through instant transcription, translating entire languages on the fly, mapping wildfire and flood zones so first responders know exactly where to go, accelerating scientific breakthroughs from climate modeling to space exploration, and cutting out the kind of tedious grunt work that wastes millions of human hours a day. The problem isn’t that AI exists, it’s that a lot of powerful people use it selfishly and irresponsibly. Blaming the tech instead of demanding better governance is like blaming the printing press for bad propaganda.

      • kibiz0r@midwest.social · 19 points · 6 months ago

        This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors

        Not the same kind of AI. At all. Generative AI vendors love this motte-and-bailey.

      • atopi@piefed.blahaj.zone · 8 points · 6 months ago

        Aren’t those different types of AI?

        I don’t think anyone hating AI is referring to the code that makes game enemies move or sorts things into categories.

        • mechoman444@lemmy.world · 1 point · 6 months ago

          LLMs aren’t artificial intelligence in any way.

          They’re extremely complex and very clever prediction engines.

          The term “artificial intelligence” was co-opted and hijacked for marketing purposes a long time ago.

          The kind of AI people generally expect to see is a fully autonomous, self-aware machine.

          Anyone who has used an LLM for any extended period knows immediately that they’re not that smart; even ChatGPT, arguably the smartest of them all, is still highly incapable.

          What we do have to come to terms with is that these LLMs do have applications, they have function, they are useful, and they can be used in deleterious ways, just like any technology.

          • atopi@piefed.blahaj.zone · 1 point · 6 months ago

            If a program that can predict prices for video games based on reviews and how many people bought it could be called AI long before 2021, then LLMs can be called AI too.

      • RushLana@lemmy.blahaj.zone · 5 points · 6 months ago

        “We should allow lead in paint, it’s easier to use” /s

        You are deliberately missing my point, which is: gen AI has an enormous amount of downside and no real-world use.

  • ronigami@lemmy.world · 42 points · 6 months ago

    I mean, it is objectively bad for life. Throwing away millions to billions of gallons of water all so you can get some dubious coding advice.

  • bridgeenjoyer@sh.itjust.works · 39 points · 6 months ago

    It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.

    • Honytawk@lemmy.zip · 9 points · 6 months ago

      Nuance is the thing.

      Thinking AI is the devil, will kill your grandma and shit in your shoes is equally as dumb as thinking AI is the solution to any problem, will take over the world and become our overlord.

      The truth is, like always, somewhere in between.

  • Empricorn@feddit.nl · 36 points · 6 months ago (edited)

    Whether intentional or not, this is gaslighting. “Here’s the trendy reaction those wacky lemmings are currently upvoting!”

    Getting to the core issue: of course we’re sick of AI and have a negative opinion of it! It’s being forced into every product, whether it makes sense or not. It’s literally taking developer jobs, then doing them worse. It’s burning fossil fuels and VC money and then hallucinating nonsense, but it’s still being jammed down our throats even though the vast majority of us see no use case or benefit from it. But feel free to roll your eyes at those acknowledging the truth…

  • rustydrd@sh.itjust.works · 36 points · 6 months ago

    Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.

    • MrMcGasion@lemmy.world · 29 points · 6 months ago (edited)

      I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work that has potential by throwing more and more hardware and money at LLMs and generative AI models, because they don’t understand the technology and see it as a way to get rich and powerful quickly.

      • NewDayRocks@lemmy.dbzer0.com · 6 points · 6 months ago

        AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.

        The problem is that it’s cheap, so that anyone can make whatever they want and most people make low quality slop, hence why it’s not “good” in your eyes.

        Making a cheap or efficient AI doesn’t help the end user in any way.

        • SolarBoy@slrpnk.net · 7 points · 6 months ago

          It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video is equivalent in energy consumption to riding a bike for 100 km.

          It’s not sustainable. I think what the person above you is referring to is us ever managing to make LLMs and the like that can run locally on a phone or laptop with good results. That would get people experimenting and trying things out themselves, instead of depending on a monthly subscription to services that can change at any time.

          • krunklom@lemmy.zip · 2 points · 6 months ago

            I mean, I have a 15 amp fuse in my apartment and a 10-second video takes like 10 minutes to make. I don’t know how much energy a 4090 draws, but anyone who has an issue with me using mine to generate a 10-second video better not play PC games.

          • NewDayRocks@lemmy.dbzer0.com · 1 point · 6 months ago

            You and OP are misunderstanding what is meant by good and cheap.

            It’s not cheap from a resource perspective like you say. However that is irrelevant for the end user. It’s “cheap” already because it is either free or costs considerably less for the user than the cost of the resources used. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.

            So the quality of the content created is not limited by cost.

            If the AI bubble pops, that won’t improve AI quality.

        • MrMcGasion@lemmy.world · 2 points · 6 months ago

          I’m using “good” in almost a moral sense. The quality of output from LLMs and generative AI is already about as good as it can get from a technical standpoint; continuing to throw money and data at it will only yield minimal improvement.

          What I mean by “good AI” is the potential of new types of AI models trained for things like diagnosing cancer and other predictive tasks we haven’t thought of yet that actually have the potential to help humanity (and not just put artists and authors out of their jobs).

          The work of training new, useful AI models is going to be done by scientists and researchers, probably on limited budgets because there won’t be a clear profit motive, and they won’t be able to afford thousands of the $20,000 GPUs being thrown at LLMs and generative AI today. But as the current AI race crashes and burns, the used hardware of today will become more affordable and hopefully actually get used for useful AI projects.

          • NewDayRocks@lemmy.dbzer0.com · 1 point · 6 months ago

            Ok. Thanks for clarifying.

            Although I am pretty sure AI is already used in the medical field for research and diagnosis. This “AI everywhere” trend you are seeing is the result of everyone trying to stick AI into everything and use it every which way.

            The thing about the AI boom is that lots of money is being invested into all fields. A bubble pop would result in investment money drying up everywhere, not make access to AI more affordable as you are suggesting.

      • FauxLiving@lemmy.world · 1 point · 6 months ago

        I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames.

        I can’t imagine that you read much about AI outside of web sources or news media, then. The exciting uses of AI are not LLMs and diffusion models, though that is all the public talks about when they talk about ‘AI’.

        For example, we have been trying to find a way to predict protein folding for decades. Using machine learning, a team was able to train a model (https://en.wikipedia.org/wiki/AlphaFold) to predict the structure of proteins with high accuracy. Other scientists have used similar techniques to train a diffusion model that will generate a string of amino acids which will fold into a structure with the specified properties (like how image description prompts are used in an image generator).

        This is particularly important because, thanks to mRNA technology, we can write arbitrary sequences of mRNA which will co-opt our cells to produce said protein.


        Robotics is undergoing similar revolutionary changes. Here is a state of the art robot made by Boston Dynamics using a human programmed feedback control loop: https://www.youtube.com/watch?v=cNZPRsrwumQ

        Here is a Boston Dynamics robot “using reinforcement learning with references from human motion capture and animation.”: https://www.youtube.com/watch?v=I44_zbEwz_w


        Object detection, image processing, logistics, speech recognition, etc.: these are all things that required tens of thousands of hours of science and engineering time to develop software for, and the software wasn’t great. Now, a college freshman with free tools and a graphics card can train a computer vision network that outperforms that human-created software.
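
        To make that concrete, here’s a minimal sketch of the kind of thing I mean, using torchvision’s pretrained weights; the 5-class task and the random stand-in batch are placeholders for a real labeled dataset:

        ```python
        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from a network pretrained on ImageNet and retrain only the
        # classification head for a new task (here: a made-up 5-class problem).
        model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        for p in model.parameters():
            p.requires_grad = False                      # freeze the backbone
        model.fc = nn.Linear(model.fc.in_features, 5)    # new 5-class head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # Stand-in batch; in practice this comes from your labeled images.
        images = torch.randn(8, 3, 224, 224)
        labels = torch.randint(0, 5, (8,))

        model.train()
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        print(f"one training step done, loss={loss.item():.3f}")
        ```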

        AI isn’t LLMs and image generators; those may as well be toys. I’m sure eventually LLMs and image generation will be good, but the only reason they seem amazing is that they’re a novel capability computers haven’t had before. Their actual impact on the real world will be minimal outside of specific fields.

            • mojofrododojo@lemmy.world · 1 point · 6 months ago

              then pray tell where is it working out great?

              again, you have nothing to refute the evidence placed before you except “ah that’s a bunch of links” and “not everything is an llm”

              so tell us where it’s going so well.

              Not the mecha-Hitler Swiftie porn, heh, yeah I wouldn’t want to be associated with it either. But your AI bros don’t care.

                • mojofrododojo@lemmy.world · 1 point · 6 months ago

                  ah what great advances has AlphaFold delivered?

                  and that robotics training, where has that improved human lives? because near as I can tell it’s simply going to put people out of work. the lowest paid people. so that’s just great.

                  but let’s give you some slack: let’s leave it to protein folding and robotics and stop sticking it into every fuckin facet of our civilization.

                  and protein folding and robotics training wouldn’t require Google, X, Meta and your grandmother to be rolling out datacenters EVERYWHERE, driving up the cost of electricity for the average user while polluting the air and water.

                  Faux, I get it, you’re an AI bro, you really are a believer. Evidence isn’t going to sway you because this isn’t evidence-driven. The suffering of others isn’t going to bother you; that’s their problem. The damage to the ecosystem isn’t your problem; you apparently don’t need water or air to exist. You got it made, bro.

                  pfft.

        • MrMcGasion@lemmy.world · 2 points · 6 months ago

          Oh, I have read and heard about all those things, but none of them (to my knowledge) are being done by OpenAI, xAI, Google, Anthropic, or any of the large companies fueling the current AI bubble, which is why I call it a bubble. The things you mentioned are where AI has potential, and I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators. And sure, maybe some of those who are innovating end up getting bought by the larger companies, but that’s not as good for their start-ups or for humanity at large.

          • FauxLiving@lemmy.world · 1 point · 6 months ago

            AlphaFold is made by DeepMind, an Alphabet (Google) subsidiary.

            Google and OpenAI are also both developing world models.

            These are a way to generate realistic environments that behave like the real world, and they are core to generating the volume of synthetic training data that would make training robotics models massively more efficient.

            Instead of building an actual physical robot and having it slowly interact with the world while learning from its one physical body, the robot’s builder could create a world-model representation of the robot’s physical characteristics and attach the control software to the simulation. Now the robot can train in a simulated environment. Then you can create multiple parallel copies of that setup to generate training data rapidly.

            It would be economically unfeasible to build 10,000 prototype robots to generate training data, but it is easy to see how running 10,000 simulated copies in parallel is possible.
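
            A toy version of that idea, using gymnasium’s vector API, with CartPole standing in for the robot simulation and the 10,000 copies shrunk to 8 so it runs anywhere:

            ```python
            import gymnasium as gym

            # Run many copies of the same simulated environment in lockstep and
            # collect experience from all of them at once.
            N_ENVS = 8  # scale this up instead of building more physical robots
            envs = gym.vector.SyncVectorEnv(
                [lambda: gym.make("CartPole-v1") for _ in range(N_ENVS)]
            )

            obs, info = envs.reset(seed=42)
            total_steps = 0
            for _ in range(100):
                actions = envs.action_space.sample()  # a real agent would choose here
                obs, rewards, terminated, truncated, infos = envs.step(actions)
                total_steps += N_ENVS                 # one step per parallel copy
            envs.close()
            print(f"collected {total_steps} simulated environment steps")
            ```

            The same pattern, with CartPole swapped for a physics model of the actual robot, is how that synthetic training data gets generated.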

            I think that continuing to throw billions at marginally better LLMs and generative models at this point is hurting the real innovators.

            On the other hand, the billions of dollars being thrown at these companies is being used to hire machine learning specialists. The real innovators who have the knowledge and talent to work on these projects almost certainly work for one of these companies or the DoD. This demand for machine learning specialists (and their high salaries) drives students to change their major to this field and creates more innovators over time.

      • haungack@lemmy.dbzer0.com · 1 point · 6 months ago

        I don’t know if the current AI phase is a bubble, but I agree with you that if it were a bubble and it burst, it wouldn’t somehow stop or end AI; it would cause a new wave of innovation instead.

        I’ve seen many AI opponents imply otherwise. When the dotcom bubble burst, the internet didn’t exactly die.

  • Mostly_Gaming@lemmy.world · 32 points · 6 months ago

    I personally think of AI as a tool; what matters is how you use it. I like to think of it like a hammer: you could use a hammer to build a house, or you could smash someone’s skull in with it. But no one’s putting the hammer in jail.

    • PeriodicallyPedantic@lemmy.ca · 19 points · 6 months ago

      Yeah, except it’s a tool that most people don’t know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.

      So… It’s not a good tool for the average layperson to be using.

    • oppy1984@lemdro.id · 19 points · 6 months ago (edited)

      Seriously, the AI hate gets old fast. Like you said, it’s a tool; just get over it, people.

    • kibiz0r@midwest.social · 13 points · 6 months ago (edited)

      “Guns don’t kill people, people kill people”

      Edit:

      Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)

      We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. “Guns don’t kill people, people do.” But some philosophers have argued that technology can have values built into it that we may not realise.

      The philosopher Don Ihde says tech can open or close possibilities. It’s not just about its function or who controls it. He says technology can provide a framework for action.

      Martin Heidegger was a student of Husserl’s, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don’t even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.

      Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you’re typing on the screen. It’s only when it breaks or it doesn’t do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it’s just the medium through which we experience the world.

      Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don’t experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.

      Now some of you are looking at me like “Bull sh*t. A person using a hammer is just a person using a hammer!” But there might actually be some evidence from neurology to support this.

      If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there’s a visual stimulus near its hand start firing when there’s a stimulus near the end of the rake, too! The monkey’s brain extends its sense of the monkey body to include the tool!

      And now here’s the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

      A person using a hammer is actually a new subject with its own way of seeing - ‘hammerman.’ That’s how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.

      You think guns don’t kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!

      So if we’re onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.

      I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.

      Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.

      But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.

      • Ignotum@lemmy.world
        link
        fedilink
        arrow-up
        11
        ·
        6 months ago

        My skull-crushing hammer that is made to crush skulls and nothing else doesn’t crush skulls, people crush skulls
        In fact, if more people had skull-crushing hammers in their homes, I’m sure that would lead to a reduction in the number of skull-crushings; after all, the only thing that can stop a bad guy with a skull-crushing hammer is a good guy with a skull-crushing hammer.

      • imetators@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        6 months ago

        We once played this game with friends where you get a word stuck to your forehead and you have to guess what you are.

        One guy got C4 (as in the explosive) and failed to guess it. I remember we had to agree among ourselves whether C4 is or is not a weapon. The main argument was that explosives are comparatively rarely used for actual killing, as opposed to other uses like mining. A parallel question was: is a knife a weapon?

        But ultimately we agreed that C4 is not a weapon. It was not invented primarily to kill or injure, as opposed to guns, which are only for killing or injuring.

        Take guns away and people will kill with literally anything else. But give people easy access to guns, and they will kill with them. A gun is not a tool; it is a weapon by design.

      • ilovepiracy@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        4
        ·
        6 months ago

        What about self-hosting? I can run local GenAI on my gaming PC with relative ease, and that isn’t consuming massive amounts of power.
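
        For anyone curious, here’s roughly what that looks like (a sketch assuming the llama-cpp-python bindings and a quantized GGUF model; the path and settings below are placeholders, and tools like ollama work similarly):

        ```python
        # Minimal local-inference sketch: everything runs on your own GPU,
        # no API calls, and no data leaves the machine.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf",  # placeholder path
            n_ctx=2048,       # context window
            n_gpu_layers=-1,  # offload all layers to the GPU if they fit
        )

        out = llm("Explain self-hosting in one sentence.", max_tokens=64)
        print(out["choices"][0]["text"])
        ```

        A 4-bit-quantized 8B model fits in the VRAM of a mid-range gaming card, and the power draw is comparable to a gaming session, not a data center.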

      • Ceedoestrees@lemmy.worldBanned
        link
        fedilink
        arrow-up
        1
        ·
        6 months ago

        And neither does AI? The massive data centers are having negative impacts on local economies, resources, and the environment.

        Just like a hammer: the massive hammer factories, the mines for the metals, the logging for the handles, and the manufacturing of all the chemicals, paints, and varnishes have a negative environmental impact too.

        Saying something kills the planet just by existing is extreme hyperbole.

  • skisnow@lemmy.ca
    link
    fedilink
    English
    arrow-up
    28
    ·
    6 months ago

    The reason most web forum posters hate AI is that AI is ruining web forums by polluting them with inauthentic garbage. Don’t treat it like it’s some sort of irrational bandwagon.

    • Ek-Hou-Van-Braai@piefed.socialOP
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      6 months ago

      He’s made the world wake up to the fact that they can’t trust the US, so that can be seen as good?

      AI isn’t that black and white, just like any big technology it can be used for good or bad.

      Just like airplanes.

        • Ek-Hou-Van-Braai@piefed.socialOP
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          6 months ago

          I used that comparison a total of two times (and might use it more); how about refuting my argument instead of getting mad at me for using a good comparison twice?

          Airplanes emit SHITLOADS of carbon into the atmosphere, and they have directly caused the deaths of tens of thousands of people. Airplanes are heavily used in war and to spy on people. Airplanes are literally used to spray pesticides and other chemicals into the air, etc. And they can mostly only be used by the rich.

          Just like with AI, there are many reasons airplanes are bad, that doesn’t mean we should get rid of them.

          • Ifera@lemmy.world
            link
            fedilink
            arrow-up
            2
            ·
            6 months ago

            A based point of view. Bravo, my dear. Do you know how rare that is? People here love to think of themselves as free thinkers, when a lot of them are, in reality, reactionary at best.

            The same goes for renting, landlords, and AI: they are disgustingly evil when used for profit, but they also have their uses. In another comment, which I’m sure will be downvoted to hell if not outright buried, I mention the uses of GenAI for translation, text simplification, summarization, and studying, yet people wield “AI=BAD” as a thought-terminating cliché.

    • absentbird@lemmy.world
      link
      fedilink
      arrow-up
      20
      ·
      6 months ago

      When people say this, they are usually talking about a very specific sort of generative LLM trained with unsupervised learning.

      AI is a very broad field with great potential; the improvements in cancer screening alone could save millions of lives over the coming decades. At its core it’s just math, and the equations have been in use for almost as long as we’ve had computers. It’s no more good or bad than calculus or trigonometry.
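
      To make the “just math” point concrete, here is a minimal sketch (made-up numbers, purely illustrative): a single artificial neuron is nothing but a weighted sum pushed through a squashing function.

      ```python
      import numpy as np

      # One "neuron": a weighted sum of inputs squashed by the logistic function.
      # Dot products and sigmoids like this are the same math inside the biggest models.
      def neuron(x, w, b):
          return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

      x = np.array([0.2, 0.7, 0.1])   # inputs (illustrative)
      w = np.array([1.5, -2.0, 0.5])  # "learned" weights (made up here)
      b = 0.1                         # bias
      print(neuron(x, w, b))          # a value between 0 and 1
      ```

      Stack enough of these and fit the weights with gradient descent, and you have a “model”; none of the underlying math is new.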

      • occultist8128@infosec.pub
        link
        fedilink
        arrow-up
        7
        ·
        6 months ago

        There’s no hope commenting like this; just get ready to be downvoted for no reason. People use the wrong terms and normalize it.

          • occultist8128@infosec.pub
            link
            fedilink
            English
            arrow-up
            2
            ·
            edit-2
            6 months ago

            See, I get why people here hate what they call ‘AI’, I totally get it, but I can’t stand people using the wrong terms when I know the correct ones. The big corpos already misuse the term, calling everything they make ‘AI’ without specifying what kind of AI it is, and people here, who I assume are techies, have gone down the same wrong path (so you sound just like those evils, and you fell for the marketing). It’s not about whataboutism; it’s about fixing the wrong terms people keep normalizing when they talk about technical stuff. I don’t care if you still don’t get it, though; I do what I can to tell the truth. And I don’t think you know what ‘whataboutism’ really is.

          • KombatWombat@lemmy.world
            link
            fedilink
            arrow-up
            1
            ·
            6 months ago

            Providing a counterexample to a claim is not whataboutism.

            Whataboutism is derailing a conversation by deflecting to a counter-accusation or an ad hominem instead of addressing someone’s argument, like what you just did.

    • Sl00k@programming.dev
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 months ago

      I would love an explanation of how I’m in the wrong for reducing my work week from 40 hours to 15 using AI.

      Existing in a predatory capitalist system and putting the blame on those who use the available tools to blunt its predatory nature is insane.

        • Sl00k@programming.dev
          link
          fedilink
          English
          arrow-up
          1
          ·
          6 months ago

          My employer is pushing AI usage; if the work is done, the work is done. This is the reality we’re supposed to be living in with AI. Conforming to the current predatory system just because “AI bad” actively harms more than it helps.

          • petrol_sniff_king@lemmy.blahaj.zone
            link
            fedilink
            arrow-up
            1
            ·
            6 months ago

            The current predatory system will push past the 40-hour work week if they’re allowed to. 60. 80. You might not even get a weekend. Unions fought for your weekend.

            AI does not fundamentally change this relationship. It is the same predatory system.