• themeatbridge@lemmy.world · 28 points · 2 months ago

    I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.

    • TrackinDaKraken@lemmy.world · 16 points · 2 months ago · edited

      So… Why don’t I see double spaces after your periods? Test. For. Double. Spaces.

      EDIT: Yep, double spaces were removed from my test, so that’s why. Although they’re still there as I’m editing this, so they’re not removed, just hidden, I guess?

      • dual_sport_dork 🐧🗡️@lemmy.world · 19 points · 2 months ago · edited

        Web browsers collapse whitespace by default, which means that sans any trickery or   deliberately   using    nonbreaking    spaces,   any number of spaces between words gets reduced to one. Since apparently every single thing in the modern world is displayed via some kind of encapsulated little browser engine nowadays, the majority of double spaces left in the universe that are not already firmly nailed down into print now appear as singles. And thus the convention is almost totally lost.
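
        A minimal sketch of that collapsing rule (approximating the default HTML whitespace handling with a regex; not an exact implementation of the spec):

        ```python
        import re

        def collapse_whitespace(text: str) -> str:
            # Default HTML rendering squashes any run of spaces, tabs, and
            # newlines down to a single space. Non-breaking spaces (\u00a0)
            # are deliberately not matched here, which is why the nbsp
            # trick above survives.
            return re.sub(r"[ \t\r\n]+", " ", text)

        print(collapse_whitespace("One.  Two.\n\nThree."))  # -> One. Two. Three.
        ```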

        • redjard@lemmy.dbzer0.com · 2 points · 2 months ago · edited

          This seems to match up with some quick tests I did just now on the pseudonymized chatbot interface of DuckDuckGo.
          ChatGPT, Llama, and Claude all managed to use double spaces themselves, and all but Llama managed to tell I was using them too.
          It might well depend on the platform, with the “native” applications for them stripping the spaces on both ends.

          Mistral seems a bit confused and uses triple spaces.

          • SGforce@lemmy.ca · 2 points · 2 months ago

            Tokenization can make it difficult for them.

            The word chunks often contain a space because it’s efficient. I would think an extra space would stand out. Writing it back should be easier, assuming there is a dedicated “space” token like the other punctuation tokens (and there must be).

            Hard mode would be asking it how many spaces there are in your sentence. I don’t think they’d figure it out unless their own token list and a description of it were trained into them specifically.
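
            As a quick illustration of those space-bearing chunks (using OpenAI’s open-source tiktoken tokenizer as one concrete example; other models’ tokenizers differ in detail):

            ```python
            import tiktoken  # pip install tiktoken

            enc = tiktoken.get_encoding("cl100k_base")

            # BPE encoding is lossless, so the double space must map to a
            # different id sequence -- it stands out to the model.
            print(enc.encode("One. Two."))
            print(enc.encode("One.  Two."))

            # Decoding token-by-token shows the chunks carrying their spaces.
            for tok in enc.encode("One.  Two."):
                print(repr(enc.decode([tok])))
            ```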

        • FishFace@lemmy.world · 6 points · 2 months ago

          HTML rendering collapses whitespace; it has nothing to do with accessibility. I would like to see the research on double-spacing causing rivers, because I’ve only ever noticed them in justified text, where I would expect the renderer to be inserting extra space after a full stop compared to between words within a sentence anyway.

          I’ve seen a lot of dubious legibility claims when it comes to typography including:

          1. serif is more legible
          2. sans-serif is more legible
          3. comic sans is more legible for people with dyslexia

          and so on.

    • CodeInvasion@sh.itjust.works · 6 points · 2 months ago · edited

      This is because spaces typically are encoded by model tokenizers.

      In many cases it would be redundant to show spaces, so tokenizers collapse them down to no spaces at all. Instead the model reads tokens as if the spaces never existed.

      For example it might output: thequickbrownfoxjumpsoverthelazydog

      Except it would actually be a list of numbers like: [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]

      Then the tokenizer decodes this and adds the spaces because they are assumed to be there. The tokenizer has no knowledge of your request, and the model output typically does not include spaces, hence your output sentence will not have double spaces.
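
      A toy sketch of the collapse-and-reinsert mechanism described above (a purely hypothetical word-level tokenizer for illustration, not how any production tokenizer works):

      ```python
      # Hypothetical tokenizer: spaces vanish on encode, and decode
      # re-inserts single spaces because they are assumed to be there.
      vocab = {"the": 1, "quick": 2, "brown": 3, "fox": 4}
      inverse = {i: w for w, i in vocab.items()}

      def encode(text: str) -> list[int]:
          return [vocab[w] for w in text.split()]  # any run of spaces is lost

      def decode(ids: list[int]) -> str:
          return " ".join(inverse[i] for i in ids)  # always single spaces

      print(decode(encode("the  quick   brown fox")))  # -> the quick brown fox
      ```

      Under that scheme, a double space can never survive the round trip, which is the behavior described here.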

      • redjard@lemmy.dbzer0.com · 3 points · 2 months ago

        I’d expect tokenizers to include spaces in tokens. You get words constructed from multiple tokens, so you can’t really insert spaces based on them. And stripping spaces would throw away too much information.

        In my tests, plenty of LLMs are also capable of seeing and using double spaces when accessed through the right interface.
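
        For example, checking OpenAI’s open-source tiktoken vocabulary (one tokenizer that is easy to inspect; others may differ):

        ```python
        import tiktoken

        enc = tiktoken.get_encoding("cl100k_base")

        # Leading spaces live inside the tokens themselves:
        print([enc.decode([t]) for t in enc.encode("the quick brown fox")])
        # typically something like: ['the', ' quick', ' brown', ' fox']

        # And the round trip is lossless, so double spaces survive it:
        s = "One.  Two."
        assert enc.decode(enc.encode(s)) == s
        ```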

        • CodeInvasion@sh.itjust.works · 2 points · 2 months ago

          The tokenizer is capable of decoding spaceless tokens into compound words following a set of rules referred to as a grammar in Natural Language Processing (NLP). I do LLM research and have spent an uncomfortable amount of time staring at the encoded outputs of most tokenizers when debugging. Normally spaces are not included.

          There is of course a token for spaces in special circumstances, but I don’t know exactly how each tokenizer implements those spaces. So it does make sense that some models would be capable of the behavior you find in your tests, but that appears to be an emergent behavior, and it’s very interesting to see it work successfully.

          I intended for my original comment to convey the idea that it’s not surprising that LLMs might fail at following the instructions to include spaces, since they normally don’t see spaces except in special circumstances. Similar to how it’s unsurprising that LLMs are bad at numerical operations, because they apply Markov-chain probability to each next token, one at a time.

          • redjard@lemmy.dbzer0.com · 2 points · 2 months ago · edited

            Yeah, I would expect it to be hard, similar to asking an LLM to substitute every letter “e” with an “a”, which I’m sure they struggle with but manage to perform too.

            In this context, though, it’s a bit misleading to explain OP’s observed behavior that way, since it implies it’s due to the fundamental nature of LLMs, when in practice all the models I tested had the ability.

            It does seem that LLMs simply don’t use double spaces (or I have not noticed them doing it anywhere yet), but if you trained or just system-prompted them differently, they could easily start to. So it isn’t a very stable method for non-AI identification.

            Edit: And of course you’d have to make sure the interfaces also don’t strip double spaces, as was guessed elsewhere. I have not checked other interfaces, but would not be surprised either way. This too, though, can’t be overly hard to fix with a few select character conversions, even in the worst cases. And clearly at least my interface already managed it just fine.
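
            One such conversion, sketched (assuming an HTML-rendering interface; swapping the second space of each pair for a non-breaking space is just one option):

            ```python
            import re

            def preserve_double_spaces(text: str) -> str:
                # A space followed by \u00a0 gives the browser nothing to
                # collapse, while normal line-wrapping still works.
                return re.sub("  ", " \u00a0", text)

            print(repr(preserve_double_spaces("One.  Two.")))  # 'One. \xa0Two.'
            ```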

    • 4am@lemmy.zip · 3 points · 2 months ago · edited

      LLMs can’t count because they’re not brains. Their output is the statistically most likely next token, and since a lot of electronic text wasn’t double-spaced after a period, they can’t “follow” that instruction.

  • blargh513@sh.itjust.works · 20 points · 2 months ago

    Seriously, I was em dashing on a goddamn typewriter, the fuck am I gonna change it now.

    In the end, it won’t matter. Being able to write well will be like riding a horse, calligraphy, or tuning a carburetor. They will all become hobbies, a quirky pastime of rich people or niche enthusiasts with limited real-world use.

    Maybe it is for the best. Most people can’t write for shit (it does not help that we often use our goddamn thumbs to do most of it), and we spend countless hours in school trying to get kids to learn.

    Science fiction has us just projecting our thoughts to others without the clumsiness of language as the medium. Maybe this is just the first step.

  • CheesyFox@lemmy.sdf.org · 16 points · 2 months ago

    fuck whoever said that — em dashes for the win

    for it is the lifeless machine that’s parroting me and the others, not the other way around. Em dashes are cool.

    Hell yeah to em dashes!

  • 4am@lemmy.zip · 11 points · 2 months ago

    Microsoft Word and other word processors often replace hyphens (easily typed on a keyboard) with em dashes and en dashes. It’s in the AutoCorrect settings.

    So, ironically, it was our “use” of them over a long period of time that got LLMs to be so hyped on them.

  • Tigeroovy@lemmy.ca · 10 points · 2 months ago

    Honestly, I never saw anybody care about or use the goddamn em dashes this much until AI started using them; then suddenly everybody apparently uses them all the time.

    Like come on, no you don’t.

    • petrol_sniff_king@lemmy.blahaj.zone · 1 point · 2 months ago

      I think people just don’t like being told what to do. Like, there are a lot of behaviors you can trace back to someone just being personally aggrieved that they ought to change anything.

      That said, if anyone else is reading, the em dash is a clue you can diagnose with—you don’t have to stop using it.

  • Thatuserguy@lemmy.world · 10 points · 2 months ago

    This shit drove me wild when I was using ChatGPT more frequently. It’d be like “do you want me to rephrase that in your voice?” and then it’d type some shit out that I’d never say in my damn life. The dashes were the worst part.

  • ShittDickk@lemmy.world · 10 points · 2 months ago

    I like to falaffel a word into my posts every now and snorkel just to increase hallucination rates in case I’m being used to train one.

    • Aeri@lemmy.world · 1 point · 2 months ago

      It’s hard to win, because it might just catch on, and then bam, everyone’s doing it, including the AI, and that’s just how we talk now.

  • Bennyboybumberchums@lemmy.world · 9 points · 2 months ago

    I’ve been trying my hand at writing for a number of years, and I’ve been using em dashes because I saw the writers I read using them. Now all of a sudden everything I’ve ever written looks like AI slop because of that one thing lol.

  • Snapz@lemmy.world · 5 points · 2 months ago

    And as a long-time en dash aficionado, I’d be instantly exposed by those lesser em dashes appearing in my communications.

  • ddplf@szmer.info · 5 points · 2 months ago

    AI is not just stealing our patterns; it’s shaping a new language out of the scraps we give up in order not to be mistaken for it!

  • oppy1984@lemdro.id · 4 points · 2 months ago

    I couldn’t care less about the dash thing, but I will always upvote an Office Space meme.