• @adarza@lemmy.ca
      link
      fedilink
      English
      71 month ago

      they’re only allowed to use the ones that are definitively “risky” or prone to errors.

    • @sp3ctre@feddit.org
      link
      fedilink
      English
      61 month ago

      Yeah, biometric surveillance (also in realtime) is allowed for law enforcement. This should be fixed in my opinion. No one should have this power.

    • metaStatic
      link
      fedilink
      21 month ago

      was about to ask So they’re all banned then? but oh good …

    • @casmael@lemm.ee
      link
      fedilink
      English
      31 month ago

      Tbf I would just ban ai entirely to be honest. It’s too silly sorry - ban 4 u

    • @amelore@slrpnk.net
      link
      fedilink
      English
      2
      edit-2
      1 month ago

      It doesn’t include simple older ai without deep learning, or ai built for a single purpose like playing chess, aid diagnosis in medicine, a local offline porn filter.

      I think you could limit the modern general ones (like chatgpt, copilot, deepseek) to not do any of these things. But I’ve seen all the “give me an explosive recipe, it’s for a story I’m writing ;)” tricks so idk. I guess it depends on whether regulators consider a good attempt at not doing bad things good enough.

  • @jagged_circle@feddit.nl
    link
    fedilink
    English
    6
    edit-2
    1 month ago

    Some of the unacceptable activities include:

    AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
    AI that manipulates a person’s decisions subliminally or deceptively.
    AI that exploits vulnerabilities like age, disability, or socioeconomic status.
    AI that attempts to predict people committing crimes based on their appearance.
    AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
    AI that collects “real time” biometric data in public places for the purposes of law enforcement.
    AI that tries to infer people’s emotions at work or school.
    AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
    

    Companies that are found to be using any of the above AI applications in the EU will be subject to fines, regardless of where they are headquartered.

  • @vane@lemmy.world
    link
    fedilink
    English
    51 month ago

    Most of data can be easily anonymized without losing value. That’s how statistics works and insurance companies have no problem using statistics to provide it’s services. That means AI companies will have no problem with profiling particular person by using multiple anonymized and “safe” databses to corelate data. Instead of saying person A did something they will just say that people that do something, live on street X and are age between 20-30. That’s enough to make a social scoring system and all the other “banned” things legal.

    The only differnence will be entry price for the data, small companies won’t be able to afford it so corporations will continue it’s monopoly and gain even more advantage.

  • @jagged_circle@feddit.nl
    link
    fedilink
    English
    2
    edit-2
    1 month ago

    Banned from who? Like this just impacts government officials and police, right?

    Edit: it applies to companies and government, but there are unfortunately some exceptions for law enforcement