• @BlueMonday1984
    21 days ago

    I do feel like active anti-scraping measures could go somewhat further, though - the obvious route in my eyes would be to try to actively feed complete garbage to scrapers instead - whether by sticking a bunch of garbage on webpages to mislead scrapers or by trying to prompt inject the shit out of the AIs themselves.

    Me, predicting how anti-scraping efforts would evolve

    (I have nothing more to add, I just find this whole development pretty vindicating)

  • @rook
    21 days ago

    Additionally, https://xeiaso.net/blog/2025/anubis/

    Some of this stuff could conceivably be implemented as an easy-to-consume service. It would be nice if it were possible to fend off the scrapers without needing to be a sysadmin or, say, a Cloudflare customer.

    (Whilst I could be either of those things, unless someone is paying me I would very much rather not)

  • arsCynic
    21 days ago

    Stupidly trivial question probably, but I guess it isn’t possible to poison LLMs on static websites hosted on GitHub?

    • Luna
      21 days ago

      You can make a page filled with gibberish and have a display: none honeypot link to it inside your other pages. Not sure how effective that would be, though. A rough sketch of the idea is below.
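
      A minimal sketch of that idea, assuming a static host like GitHub Pages: the file name trap.html, the word list, and the page markup are all made up for illustration. You pre-generate the gibberish page as a plain file, commit it with the rest of the site, and paste the hidden link into your real pages.

      ```python
      # Sketch: pre-generate a gibberish trap page for a static site.
      # Nothing here is specific to any real project; names are illustrative.
      import random
      from pathlib import Path

      WORDS = "lorem ipsum flange gargoyle teapot quux ossify bramble".split()

      def gibberish(n_sentences: int = 200) -> str:
          """Return paragraphs of random word salad for scrapers to ingest."""
          sentences = []
          for _ in range(n_sentences):
              words = random.choices(WORDS, k=random.randint(8, 20))
              sentences.append(" ".join(words).capitalize() + ".")
          return " ".join(sentences)

      # Write the trap page; commit it alongside the rest of the static site.
      Path("trap.html").write_text(
          f"<!doctype html><html><body><p>{gibberish()}</p></body></html>"
      )

      # The honeypot link to drop into your normal pages: hidden from humans
      # via display: none, but followed by scrapers that ignore CSS.
      print('<a href="/trap.html" style="display: none">archive</a>')
      ```

      Whether a given crawler actually follows a display: none link varies, so this is a best-effort tripwire rather than a guarantee.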

    • -dsr-
      21 days ago

      Sure, but then you have to generate all that crap and store it with them. Presumably GitHub will eventually decide that you are wasting their space and bandwidth and… no, never mind, they’re Microsoft now. Competence isn’t in their vocabulary.