A judge in Washington state has blocked video evidence that’s been “AI-enhanced” from being submitted in a triple murder trial. And that’s a good thing, given that too many people seem to think applying an AI filter can give them access to secret visual data.

  • @Downcount@lemmy.world
    171
    3 months ago

    If you’ve ever encountered an AI hallucinating things that simply don’t exist, you know how bad an idea AI-enhanced evidence actually is.

    • Bobby Turkalino
      8
      3 months ago

      Everyone uses the word “hallucinate” when describing visual AI because it’s normie-friendly and cool sounding, but the results are a product of math. Very complex math, yes, but computers aren’t taking drugs and randomly pooping out images because computers can’t do anything truly random.

      You know what else uses math? Basically every image modification algorithm, including resizing. I wonder how this judge would feel about viewing a 720p video on a 4k courtroom TV because “hallucination” takes place in that case too.

        • Bobby Turkalino
          7
          3 months ago

          Both insert pixels that didn’t exist before, so where do we draw the line of how much of that is acceptable?

          • @Downcount@lemmy.world
            57
            3 months ago

            Look at it this way: if you have an unreadable licence plate because of low resolution, interpolating won’t make it readable (as long as we haven’t switched to a CSI universe). An AI, on the other hand, could just “invent” (I know, I know, normie speak in your eyes) a readable one.

            You’ll draw the line yourself when you get your first speeding ticket for a car that wasn’t yours.

            • @Natanael@slrpnk.net
              8
              3 months ago

              License plates are an interesting case, because with a known set of visual symbols (the known fonts used by approved plate issuers) you can often accurately deblur even very, very blurry text. This isn’t done with AI algorithms, but by modeling the blur of the cameras and the unique blur gradient this produces for each letter. It does require a certain minimum pixel resolution of the letters to guarantee an unambiguous result, though.
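That matching approach can be sketched in a few lines. Everything here is illustrative (toy 1-D “glyphs”, an assumed blur kernel, made-up names); the point is that the method only compares known candidates against the observation and never invents pixels.

```python
import numpy as np

# Toy "glyphs": 1-D intensity profiles for a known font (illustrative values).
GLYPHS = {
    "E": np.array([1, 1, 0, 1, 0, 1, 1], float),
    "F": np.array([1, 1, 0, 1, 0, 0, 0], float),
    "B": np.array([1, 1, 1, 0, 1, 1, 1], float),
}

def blur(signal, kernel):
    """Model the camera blur as convolution with a known kernel."""
    return np.convolve(signal, kernel, mode="same")

def deblur_by_matching(observed, kernel):
    """Pick the glyph whose *blurred* rendering best matches what the
    camera saw -- candidates are compared, no new pixels are invented."""
    scores = {g: np.sum((blur(t, kernel) - observed) ** 2)
              for g, t in GLYPHS.items()}
    return min(scores, key=scores.get)

kernel = np.array([0.25, 0.5, 0.25])   # assumed point-spread function
observed = blur(GLYPHS["F"], kernel)   # a heavily smeared "F"
print(deblur_by_matching(observed, kernel))  # F
```

If two glyphs blur to nearly the same profile, the match is ambiguous, which is exactly the minimum-resolution caveat above.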

            • Bobby Turkalino
              2
              3 months ago

              Interesting example, because tickets issued by automated cameras aren’t enforced in most places in the US. You can safely ignore those tickets and the police won’t do anything about it because they know how faulty these systems are and most of the cameras are owned by private companies anyway.

              “Readable” is a subjective matter of interpretation, so again, I’m confused on how exactly you’re distinguishing good & pure fictional pixels from bad & evil fictional pixels

              • @Downcount@lemmy.world
                22
                3 months ago

                Whether or not tickets are enforced doesn’t change my argument, nor does it invalidate it.

                You are acting stubborn and childish. Everything there was to say has been said. If you still think you are right, so be it; you are either unable or unwilling to understand. Let me be clear: I think you are trolling, and I’m not in any mood to participate in this anymore.

                • Bobby Turkalino
                  1
                  3 months ago

                  Sorry, it’s just that I work in a field where distinctions are based on math and/or logic, while you’re drawing the distinction between AI- and non-AI-based image interpolation from opinion and subjective observation.

                  • pm_me_ur_thoughts
                    11
                    3 months ago

                    Okay, I’m not disagreeing with you that it’s all math.

                    However, interpolation of pixels is simple math. AI generation is complex math, and it is only as good as its training data.

                    The licence plate example is a good one. Interpolation just takes some average or midpoint and fills in the pixel. With AI generation, if the training set contained your number plate 999 times out of 1000, it will generate your number plate no matter whose plate you feed in. To be usable as evidence, it would need to be far more deterministic than the probabilistic nature of AI-generated content allows.
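That deterministic/probabilistic split can be made concrete with a toy sketch. The “generative” fill below is a stand-in for a real model (the Gaussian sampling is purely illustrative, not an actual diffusion step):

```python
import random

def interpolate_fill(left, right):
    # Deterministic: same neighbours in, same pixel out, every time.
    return (left + right) / 2

def generative_fill(left, right, rng):
    # Probabilistic: samples around the neighbourhood mean, so two runs
    # with different RNG states "invent" different pixels.
    return rng.gauss((left + right) / 2, sigma=10)

# Interpolation is repeatable by construction.
assert interpolate_fill(100, 120) == interpolate_fill(100, 120)  # 110.0

# Unseeded generation varies from run to run.
a = generative_fill(100, 120, random.Random())
b = generative_fill(100, 120, random.Random())
print(a, b)  # almost certainly two different "evidence" pixels
```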

              • @abhibeckert@lemmy.world
                8
                3 months ago

                You can safely ignore those tickets and the police won’t do anything

                Wait what? No.

                It’s entirely possible if you ignore the ticket, a human might review it and find there’s insufficient evidence. But if, for example, you ran a red light and they have a photo that shows your number plate and your face… then you don’t want to ignore that ticket. And they generally take multiple photos, so even if the one you received on the ticket doesn’t identify you, that doesn’t mean you’re safe.

                When automated infringement systems were brand new, the cameras were low quality, poorly installed, and didn’t gather the evidence necessary to win a court challenge. Getting tickets overturned was so easy that they didn’t even bother taking cases to court. It’s not that easy now: they have picked up their game and are continuing to improve the technology.

                Also - if you claim someone else was driving your car, and then they prove in court that you were driving… congratulations, your slap-on-the-wrist fine is now a much more serious matter.

          • @Blackmist@feddit.uk
            19
            3 months ago

            I mean we “invent” pixels anyway for pretty much all digital photography based on Bayer filters.

            But the answer is linear interpolation. That’s where we draw the line. We have to be able to point to a line of code and say where the data came from, rather than a giant blob of image data that could contain anything.
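As a minimal illustration of that traceability (a sketch, not any codec’s actual resampler): with linear interpolation you can point at the exact line of arithmetic each new pixel came from.

```python
def upscale_row_2x(row):
    """Double a row of pixel intensities with linear interpolation.

    Every inserted pixel is the midpoint of its two known neighbours;
    its provenance is exactly this arithmetic, nothing more.
    """
    out = []
    for a, b in zip(row, row[1:]):
        out.append(a)            # original pixel, untouched
        out.append((a + b) / 2)  # invented pixel, but fully traceable
    out.append(row[-1])
    return out

print(upscale_row_2x([10, 20, 40]))  # [10, 15.0, 20, 30.0, 40]
```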

          • @Catoblepas@lemmy.blahaj.zone
            2
            3 months ago

            What’s your bank account information? I’m either going to add or subtract a lot of money from it. Both alter your account balance so you should be fine with either right?

      • @Catoblepas@lemmy.blahaj.zone
        38
        3 months ago

        Has this argument ever worked on anyone who has ever touched a digital camera? “Resizing video is just like running it through AI to invent details that didn’t exist in the original image”?

        “It uses math” isn’t the complaint and I’m pretty sure you know that.

      • Flying Squid
        36
        3 months ago

        normie-friendly

        Whenever people say things like this, I wonder why that person thinks they’re so much better than everyone else.

        • @Hackerman_uwu@lemmy.world
          5
          3 months ago

          Tangentially related: the more people seem to support AI-in-all-the-things, the less it turns out they understand it.

          I work in the field. I had to explain to a CIO that his beloved “ChatPPT” was just autocomplete. He became enraged. We implemented a 2015 chatbot instead, and he got his bonus.

          We have reached the winter of my discontent. Modern life is rubbish.

        • Bobby Turkalino
          2
          3 months ago

          Normie, layman… as you’ve pointed out, it’s difficult to use these words without sounding condescending (which I didn’t mean to be). The media using words like “hallucinate” to describe linear algebra is necessary because most people just don’t know enough math to understand the fundamentals of deep learning - which is completely fine, people can’t know everything and everyone has their own specialties. But any time you simplify science so that it can be digestible by the masses, you lose critical information in the process, which can sometimes be harmfully misleading.

          • @Krauerking@lemy.lol
            16
            3 months ago

            Or sometimes the colloquial term people have picked up is a simplified tool for getting the right point across.

            Just because it’s guessing using math doesn’t mean it isn’t, in a sense, hallucinating the additional data. That data did not exist before and the model willed it into existence, much like a hallucination, and the word lets people quickly grasp that the output isn’t trustworthy, thanks to their existing understanding of the term.

            Part of language is finding the right words so that people can quickly understand topics, even if it means giving up nuance; the goal should be getting them to the right conclusion, even in simplified form, which doesn’t always happen when there is bias. I think this one works just fine.

          • @cucumberbob@programming.dev
            2
            3 months ago

            It’s not just the media who uses this term. According to this study which I’ve had a very brief skim of, the term “hallucination” was used in literature as early as 2000, and in Table 1, you can see hundreds of studies from various databases which they then go on to analyse the use of “hallucination” in.

            It’s worth saying that this study is focused on showing how vague the term is, and how many different and conflicting definitions of “hallucination” there are in the literature, so I for sure agree it’s a confusing term. Just it is used by researchers as well as laypeople.

      • @abhibeckert@lemmy.world
        13
        3 months ago

        computers aren’t taking drugs and randomly pooping out images

        Sure, no drugs involved, but they are running a pseudorandom number generator and using its output (along with non-random data) to generate the image.

        The result: ask for the same image twice and you get two different images. Similar, but clearly not the same person; sisters or cousins perhaps, and nowhere near usable as evidence in court.

        • @Gabu@lemmy.world
          2
          3 months ago

          Tell me you don’t know shit about AI without telling me you don’t know shit. You can easily reproduce the exact same image by defining the starting seed and constraining the network to a specific sequence of operations.
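At the PRNG level, that reproducibility claim is easy to demonstrate; here Python’s stdlib generator stands in for a model’s noise source (illustrative only, not a real diffusion pipeline):

```python
import random

def sample_noise(seed, n=4):
    """Stand-in for the initial noise a diffusion model denoises.
    Same seed and same sequence of operations -> identical output."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

run_1 = sample_noise(seed=42)
run_2 = sample_noise(seed=42)   # pinned seed: bit-identical "image"
run_3 = sample_noise(seed=43)   # new seed: a different "sister" image

print(run_1 == run_2, run_1 == run_3)  # True False
```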

          • @Natanael@slrpnk.net
            9
            3 months ago

            But if you don’t do that, the ML engine doesn’t have the introspective capability to realize it failed to recreate an image.

            • @Gabu@lemmy.world
              2
              3 months ago

              And if you take your eyes out of their sockets you can no longer see. That’s a meaningless statement.

              • @blind3rdeye@lemm.ee
                3
                3 months ago

                The point is that the AI ‘enhanced’ photos have nice clear details that are randomly produced, and thus should not be relied on. Are you suggesting that we can work around that problem by choosing a random seed manually? Do you think that solves the problem?

      • @Malfeasant@lemmy.world
        11
        3 months ago

        computers can’t do anything truly random.

        Technically incorrect - computers can be supplied with sources of entropy, so while it’s true that they will produce the same output given identical inputs, it is in practice quite possible to ensure that they do not receive identical inputs if you don’t want them to.
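The standard library shows the difference directly: `os.urandom` draws from the OS entropy pool (fed by hardware events and not reproducible), while a seeded PRNG given identical inputs repeats exactly.

```python
import os
import random

# OS entropy pool: different bytes on every call, for practical purposes.
print(os.urandom(8).hex(), os.urandom(8).hex())

# Deterministic PRNG: identical seed -> identical output stream.
rng_a = random.Random(1234)
rng_b = random.Random(1234)
print(rng_a.random() == rng_b.random())  # True
```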

      • @Kedly@lemm.ee
        8
        3 months ago

        Bud, “hallucinate” is a perfect term for the shit AI creates, because it doesn’t understand reality, regardless of whether math is creating that hallucination or not.