In this video I discuss how generative AI technology has grown far past the government's ability to effectively control it, and how the current legislative measures could lead to innocent people being jailed.

  • @Mr_Blott@lemmy.world · 64 · 1 year ago

    If you want to be taken seriously about child abuse, have you tried not having thumbnails that look like a ten-year-old made them 😂

  • While lolicon is absolutely disgusting, it's not actually CSAM. Legislation won't work either and is honestly a waste of time. Any effort spent protecting digital children should instead be spent protecting real ones.

    • MuchPineapples · 25 · 1 year ago

      The problem is that it's not just cartoon characters, but also realistic-looking people. That makes it impossible, especially in the coming years as the techniques improve, to know what is fake and what is not, so the fake ones should also be banned. And these models are trained on images of actual abused children, which of course is the main problem with this.

        • @Microw@lemm.ee · 13 · 1 year ago

          It wouldn't surprise me, tbh. From my superficial visit to the darknet years ago, it seemed like these CSAM consumers have specific "favourites" among the victims whom they want to see more of. At least that's what I remember from clicking a link to such a chan and noping out of it.

    • Neato · 25 · 1 year ago

      Prove it's fake when some of it depicting your daughter is making its way around school.

      You've missed the point. Fake or not, it does damage to people. And eventually it won't be possible to determine if it's real or not.

      • @hydration9806@lemmy.ml · 16 · 1 year ago

        When that becomes widespread, photos will be generatable for literally everyone, not just minors but every person with photos online. It will be a societal shift; images will be assumed to be AI-generated, making any guilt or shame about a nude photo existing obsolete.

        • Neato · 5 · 1 year ago

          What a disgusting assumption. And the best argument against AI I've ever heard.

          • @hydration9806@lemmy.ml · 12 · 1 year ago

            I mean, anyone with enough artistic talent can draw whatever they would like right now. With AI image generation, it essentially just gives everyone the ability to draw whatever they want. You can try to fight the tech all you want, but it’s a losing battle.

      • Ignotum · 11 · 1 year ago

        AI-generated porn depicting real people seems like a different and much bigger issue.

        AI-generated CSAM in general, while disgusting, at least doesn't directly harm people; fabricated nudes most definitely do, regardless of the age of the victim.

        • Neato · 6 · 1 year ago

          You just implied children aren’t real people.

          • Ignotum · 5 · 1 year ago

            AI-generated nudes of no one in particular aren't hurting anyone, not directly at least, but AI-generated nudes of a specific person, using that person's likeness and everything, are much worse.

            AI can generate faces of people who don't actually exist; that's what I mean.

            The post made it seem like it was about AI-generated CSAM in general, which, while disgusting, doesn't directly harm anyone. But then the comments spoke about AI-generated CSAM depicting a real individual, and that's much worse, but also not a problem that's specific to children.

            • Neato · 4 · 1 year ago

              AI CSAM is incredibly harmful. All CSAM is harmful. It's been shown to increase the chance of pedophilic abuse.

              Stop defending CSAM, HOLY SHIT.

              • Helix 🧬 · 10 · 1 year ago (edited)

                It’s been shown to increase chance of pedophilic abuse.

                Can you link me a source for that, please?

              • Ignotum · 5 · 1 year ago

                Jeez, calm down.

                I am not defending CSAM, just saying that CSAM depicting an actual existing child is magnitudes worse, as is any other kind of fabricated sexual content of real people.

                Take loli porn for example: it's probably bad for society, but if someone makes loli porn based on the appearance of an actual individual, that's much more fucked up, and in addition to the "normal" detrimental effects, it would also harm that victim in a much more direct way.

              • Ignotum · 1 · 1 year ago

                Currently pedos tend to group up and share real CSAM, and these "communities" probably serve to normalize the activities for the members. Perhaps being able to generate it will keep pedos from clumping together, reducing the degree of normalization so they're more likely to seek help, and as a bonus, real children aren't preyed upon to create said CSAM.

                And saying that removing AI tools that can generate CSAM will lead them to "attempt to fuck children in the streets", as you say: would you also say that we should stop criminalizing the distribution of existing CSAM, because the existing CSAM shared in paedophile circles is all that is keeping them from going out and raping children?

    • @pixeltree@lemmy.world · 4 · 1 year ago (edited)

      What data is it trained on? This isn’t meant to be a “gotcha” question, I’m wondering about it.

  • mo_ztt ✅ · 20 · 1 year ago (edited)

    What the hell is this guy?

    “Here’s a case where people made and shared fake nudes of real underage girls, doing harm to the girls”

    “But what the hell, that’s kind of hard to stop. Oh also here’s this guy who went to prison for it because it’s already illegal.”

    “Really the obvious solution everyone’s missing is: If you’re a girl in the world, just keep images of yourself off the internet”

    “Problem solved. Right?”

    I’m only slightly exaggerating.

    • spez (OP) · 5 · 1 year ago

      He is a deepfake of Luke Smith.

    • spez (OP) · 4 · 1 year ago

      Also, I think the most governments would be able to do is increase the friction of this process by giving all AI-generated photos an 'id' to track later, and probably by controlling open-source models, but that's harder to do. Most probably, old senators who don't know Gmail will pass unenforceable laws which won't do jackshit but get them votes.
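
The 'id' idea can be sketched as a simple provenance registry: the generator records a hash of every image it produces, so an investigator can later look an image up. This is a hypothetical minimal sketch (the registry, `register_image`, and the model/seed fields are all made up for illustration; real proposals such as C2PA embed signed metadata in the file instead), and the last lines show why it's fragile:

```python
import hashlib

# Hypothetical provenance registry: the generator records a hash of every
# image it produces, keyed to the model run that made it.
registry = {}

def register_image(image_bytes: bytes, model: str, seed: int) -> str:
    """Record a generated image and return its tracking id."""
    image_id = hashlib.sha256(image_bytes).hexdigest()
    registry[image_id] = {"model": model, "seed": seed}
    return image_id

def lookup(image_bytes: bytes):
    """Return generation metadata if this exact image was registered."""
    return registry.get(hashlib.sha256(image_bytes).hexdigest())

img = b"\x89PNG...stand-in for real image bytes"
image_id = register_image(img, model="example-model-v1", seed=42)

print(lookup(img) is not None)   # True: exact bytes are traceable
print(lookup(img + b"\x00"))     # None: one changed byte defeats tracking
```

Any re-encode, crop, or single-byte edit changes the hash, which is one reason this kind of tracking is hard to enforce in practice.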

      • mo_ztt ✅ · 11 · 1 year ago (edited)

        The point I’m trying to make is, you don’t even have to do that.

        There are already laws against revenge porn and realistic child porn. You don’t have to “prevent” this stuff from happening. That is, as he accurately points out, more or less impossible. But, if it happens you can absolutely do an investigation, and if you can find out who did it, you can put them in jail. That to me sounds like a pretty good solution and I’m still waiting to hear what his issue is with it.

        • spez (OP) · 1 · 1 year ago

          I don’t have any problems with the points you discussed either. Can’t speak for him though.

  • andrew_bidlaw · 20 · 1 year ago

    Creating, collecting and sharing CSAM is already covered by the law. There are orgs and agencies for tracking and prosecuting these violations.

    It's like fighting against 3D printers because you can make yourself a DIY gun, as if that had never been possible before we got all pipes banned from hardware stores. The means to produce fictional CSAM have always existed and will always exist; the problem is with the people who use an LLM, a camera, or a fanfic to create and share that content. Or a Lemmy community, which was a problem in recent months.

    It's better to ensure the existing means of fighting such content are effective, and that society is educated about this danger and knows how to avoid and report it.

  • @CJOtheReal@lemmy.sdf.org · 18 · 1 year ago

    Loli stuff isn't CSAM. You can find it bad, but it's still just a drawing or generated image. No real person is harmed.

      • @CJOtheReal@lemmy.sdf.org · 6 · 1 year ago

        You know, loli can also just mean flat-chested and young-looking; it doesn't mean it's portraying an actual child... And nope, therefore you can't guarantee shit you pull out of your ass. Many of those watching such stuff find actual children very disgusting.

        • asudox · 1 · 1 year ago

          I am not sure how a person that looks like a child and has a very childish voice can only be a “young looking” adult.

  • meseek #2982 · 13 · 1 year ago

    Me: I just want real looking dinosaurs with cool, long flowing hair.

  • Neato · 8 · 1 year ago

    Most of this thread is defending CSAM, which loli definitely is. WTF. Disgusting community.

    • I think you’re confused. No one is defending CSAM. Lolicon isn’t CSAM. Also I don’t understand why we would spend effort protecting digital children instead of protecting real ones.

      • @limitedduck · 9 · 1 year ago

        Nobody is protecting digital children, and it's almost always disingenuous when this argument is claimed to be made. The effort is to stop the normalization of the sexualization of children. Lolicon is exclusively about romancing or sexualizing children. Deluded adults who think what happens in lolicon material is OK are potential risks to real children. Allowing such a risk to children for the pleasure of these adults is absurd.

          • @limitedduck · 1 · 1 year ago
            1. The amount of people warped by COD or lolicon is not 100%, but it's certainly not 0%.
            2. It sounds like you haven't actually played COD, because the game is about WARFARE, not domestic terrorism. Maybe ask people who joined the US military how inspired they were by the game.
        • Fair enough. IMO lolicon is disgusting. And I'm not making an argument in bad faith; I just see how much general society fails at protecting children and would rather see any effort spent cracking down on lolicon be used to help real children instead.

          • @limitedduck · 2 · 1 year ago

            I understand what you're saying, but fighting against lolicon doesn't necessarily take away from the fight against real CSAM. The reality is that serious, far-reaching, and ultimately human issues like the exploitation of children are complex and require effort on multiple fronts to be effective.

  • @andruid@lemmy.ml · 5 · 1 year ago

    Couldn't the fact that AI-generated content is reproducible, given the exact parameters (or coordinates in latent space) and model, help remove the confusion? Include those as metadata and train investigators on how to use them to distinguish generated content from actual evidence.

    • There's an option to speed up generation, but it makes the output less deterministic: it's 98% the same image, but a little different. It's also very hard to reproduce the exact same hardware and software setup. That's the first issue.

      The second is: I had examples of images with generation data that I could reproduce to look 99% like the original, and then just updating a single word or one generation parameter (a different LoRA version, for example) switched the person away or changed their appearance completely. (Imagine a picture of a street where a car is suddenly not there, or it's blue instead of red.) That makes reproducibility an unreliable option. Backgrounds of images are even less reliable than the focus object.
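
The reproducibility idea, and the objection to it, can be illustrated with any process that is fully determined by its inputs. A minimal stand-in sketch (a hash takes the place of a real diffusion model, which would add hardware and software nondeterminism on top of this):

```python
import hashlib
import json

def fake_generate(prompt: str, seed: int, lora: str) -> str:
    """Stand-in for an image generator whose output is fully determined
    by its parameters (real generators often aren't)."""
    params = json.dumps({"prompt": prompt, "seed": seed, "lora": lora},
                        sort_keys=True)
    return hashlib.sha256(params.encode()).hexdigest()

a = fake_generate("a red car on a street", seed=1234, lora="v1")
b = fake_generate("a red car on a street", seed=1234, lora="v1")
c = fake_generate("a red car on a street", seed=1234, lora="v2")

print(a == b)  # True: identical parameters reproduce the output exactly
print(a == c)  # False: one changed parameter (e.g. LoRA version) changes everything
```

Exact reproduction therefore requires recording and replaying every parameter bit-for-bit, which is precisely what the nondeterministic fast paths described above break.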

  • spez (OP) · 2 · 1 year ago

    What do you people think this will lead to? Is it solvable or not? and if yes then how?