- cross-posted to:
- fuck_ai@lemmy.world
I’m more of a semicolon enjoyer myself.
Personally, I’m more of a colon semi-enjoyer.
I have Crohn’s and hate my colon as much as it hates me
Then you should try half-assing it, Crohn’s isn’t semi enough
I’m really into periods.
I do not miss periods.
Wait, we were talking about punctuation, weren’t we?
I load my commas into a 10 gauge shotgun and fire them at the page.
Try the interrobang‽
They serve different functions; they need not compete for your love.
They serve different functions — they need not compete for your love.
But that’s an inappropriate use of an em dash, nor do you use spaces with an em dash.
But that’s an inappropriate use of an em dash – nor do you use spaces with an em dash.
Me; too.
I’m confused, show us on the doll where the textbook fingered you
I still double space after a period, because fuck you, it is easier to read. But as a bonus, it helped me prove that something I wrote wasn’t AI. You literally cannot get an AI to add double spaces after a period. It will say “Yeah, OK, I can do that” and then spit out a paragraph without it. Give it a try, it’s pretty funny.
So… Why don’t I see double spaces after your periods? Test. For. Double. Spaces.
EDIT: Yep, double spaces were removed from my test. So, that’s why. Although, they are still there as I’m editing this. So, not removed, just hidden, I guess?
Web browsers collapse whitespace by default, which means that, absent any trickery or deliberate use of non-breaking spaces, any run of spaces between words gets reduced to one. Since apparently every single thing in the modern world is displayed via some kind of encapsulated little browser engine nowadays, the majority of double spaces left in the universe that are not already firmly nailed down into print now appear as singles. And thus the convention is almost totally lost.
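The collapsing described here is easy to demonstrate outside a browser; a minimal Python sketch of the same normalization (the function name is mine, not any browser API):

```python
import re

def collapse_whitespace(text: str) -> str:
    # Roughly what a browser does to text under CSS "white-space: normal":
    # any run of spaces, tabs, and newlines renders as a single space.
    return re.sub(r"\s+", " ", text)

print(collapse_whitespace("One space. Two  spaces.   Three   spaces."))
# prints: One space. Two spaces. Three spaces.
```

However carefully you type your double spaces, this is what the rendering layer hands to the reader.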
This seems to match up with some quick tests I did just now on the pseudonymized chatbot interface of DuckDuckGo.
ChatGPT, Llama, and Claude all managed to use double spaces themselves, and all but Llama managed to tell that I was using them too.
It might well depend on the platform, with the “native” applications for them stripping the spaces on both ends.
Mistral seems a bit confused and uses triple spaces.
Tokenization can make it difficult for them.
The word chunks often contain a space because it’s efficient, so I would think an extra space would stand out. Writing it back should be easier, assuming there is a dedicated “space” token like the other punctuation tokens; there must be.
Hard mode would be asking it how many spaces there are in your sentence. I don’t think they’d figure it out unless their own token list and a description of it were trained into them specifically.
Markdown usually collapses double spaces, yeah. But you can force the double spaces. Like this.
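One standard workaround (my suggestion, not something stated in the thread) is to swap the second space for a non-breaking space, which HTML renderers won’t collapse; a sketch:

```python
def protect_double_spaces(text: str) -> str:
    # HTML renderers collapse runs of ordinary spaces, but a
    # non-breaking space (U+00A0) survives rendering, so "  "
    # becomes one normal space plus one non-breaking space.
    return text.replace("  ", " \u00a0")

print(protect_double_spaces("First sentence.  Second one."))
```

Alternating normal and non-breaking spaces keeps line wrapping mostly sane while preserving the visible gap.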
Double spaces after periods can create “rivers.” This makes text more difficult to read for those with dyslexia. Whatever is used as a text editor is probably stripping them out for accessibility reasons. I suppose double spaces made sense with monospaced fonts.
https://apastyle.apa.org/style-grammar-guidelines/paper-format/accessibility/typography#myth4
HTML rendering collapses whitespace; it has nothing to do with accessibility. I would like to see the research on double-spacing causing rivers, because I’ve only ever noticed them in justified text, where I would expect the renderer to be inserting extra space after a full stop compared to between words within a sentence anyway.
I’ve seen a lot of dubious legibility claims when it comes to typography, including:
- serif is more legible
- sans-serif is more legible
- comic sans is more legible for people with dyslexia
and so on.
This is because of how spaces are typically encoded by model tokenizers.
In many cases it would be redundant to store the spaces, so tokenizers collapse them down to no spaces at all. Instead the model reads the tokens as if the spaces never existed.
For example it might output: thequickbrownfoxjumpsoverthelazydog
Except it would actually be a list of numbers like: [1, 256, 6273, 7836, 1922, 2244, 3245, 256, 6734, 1176, 2]
Then the tokenizer decodes this and adds the spaces because they are assumed to be there. The tokenizer has no knowledge of your request, and the model output typically does not include spaces, hence your output sentence will not have double spaces.
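A toy sketch of the mechanism this comment describes; this is a hypothetical word-level tokenizer, not any real model’s (real BPE tokenizers are subtler, and many bake a leading space into the token itself):

```python
# Whitespace is discarded on encode and re-inserted on decode,
# so extra spaces in the input can never survive a round trip.
VOCAB = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]
TOKEN_ID = {word: i for i, word in enumerate(VOCAB)}

def encode(text: str) -> list[int]:
    # str.split() drops any run of whitespace, single or double
    return [TOKEN_ID[word] for word in text.split()]

def decode(ids: list[int]) -> str:
    # exactly one space is assumed between every pair of tokens
    return " ".join(VOCAB[i] for i in ids)

ids = encode("the quick  brown fox jumps over the  lazy dog")
print(decode(ids))  # the double spaces are gone after the round trip
```

Under this scheme the model never even sees that the request contained double spaces, let alone learns to reproduce them.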
I’d expect tokenizers to include spaces in tokens. You get words constructed from multiple tokens, so you can’t really insert spaces based on token boundaries. And too much information is lost when spaces are stripped.
In my tests, plenty of LLMs are also capable of seeing and using double spaces when accessed through the right interface.
The tokenizer is capable of decoding spaceless tokens into compound words following a set of rules referred to as a grammar in Natural Language Processing (NLP). I do LLM research and have spent an uncomfortable amount of time staring at the encoded outputs of most tokenizers when debugging. Normally spaces are not included.
There is of course a token for spaces in special circumstances, but I don’t know exactly how each tokenizer implements those spaces. So it does make sense that some models would be capable of the behavior you found in your tests, but that appears to be an emergent behavior, and it’s very interesting to see it work successfully.
I intended for my original comment to convey the idea that it’s not surprising that LLMs might fail to follow the instruction to include spaces, since they normally don’t see spaces except in special circumstances. Similar to how it’s unsurprising that LLMs are bad at numerical operations because of how they apply Markov-chain-style probabilities to pick each next token, one at a time.
Yeah, I would expect it to be hard, similar to asking an LLM to substitute every letter e with an a. Which I’m sure they struggle with, but manage to perform too.
In this context, though, it’s a bit misleading to explain OP’s observed behavior with that, since it implies it is due to the fundamental nature of LLMs, when in practice all the models I tested fundamentally had the ability.
It does seem that LLMs simply don’t use double spaces (or I have not noticed them doing it anywhere yet), but if you trained or just system-prompted them differently they could easily start to. So it isn’t a very stable method for non-AI identification.
Edit: And of course you’d have to make sure the interfaces also don’t strip double spaces, as was guessed elsewhere. I have not checked other interfaces and would not be surprised either way whether they did or did not. This too, though, can’t be overly hard to fix with a few select character conversions, even in the worst cases. And clearly at least my interface already managed to do it just fine.
LLMs can’t count because they’re not brains. Their output is the statistically most-likely next character, and since a lot of electronic text wasn’t double-spaced after a period, they can’t “follow” that instruction.
Seriously, I was em dashing on a goddamn typewriter, the fuck am I gonna change it now.
In the end, it won’t matter. Being able to write well will be like riding a horse, doing calligraphy, or tuning a carburetor. They will all become hobbies, a quirky pastime of rich people or niche enthusiasts, with limited real-world use.
Maybe it is for the best. Most people can’t write for shit (does not help that we often use our goddamn thumbs to do most of it) and we spend countless hours in school trying to get kids to learn.
Science fiction has us just projecting our thoughts to others without the clumsiness of language as the medium. Maybe this is just the first step.
fuck whoever said that — em dashes for the win
for this lifeless machine is the one parroting me and the others, not the other way around. Em dashes are cool.
Hell yeah to em dashes!
I will never stop using them. Fuck AI. I won’t let it take the joy of nice, legible formatting away from me.
Microsoft Word and other word processors often replace hyphens (easily typed on a keyboard) with em dashes and en dashes. It’s in the AutoCorrect settings.
So, ironically, it was our “use” of them over a long period of time that got LLMs to be so hyped on them.
I don’t know that LLMs are ingesting all that many word documents; they probably got the em dashes from published books
Downloaded from pirate sources!
Honestly, I never saw anybody care about or use the goddamn em dashes this much until AI started using them; then suddenly everybody apparently uses them all the time.
Like, come on, no you don’t.
Same thing goes for triple dot as a single character.
I think people just don’t like being told what to do. Like, there are a lot of behaviors you can trace back to someone just being personally aggrieved that they ought to change anything.
That said, if anyone else is reading, the em dash is a clue that you use to diagnose with—you don’t have to stop using it.
This shit drove me wild when I was using ChatGPT more frequently. It’d be like “do you want me to re-phrase that in your voice?” and then type some shit out that I’d never say in my damn life. The dashes were the worst part
So you are in fact the opposite of this meme.
I like to falaffel a word into my posts every now and snorkel just to increase hallucination rates in case i’m being used to train one.
It’s hard to win because it might just catch on and then bam everyone’s doing it including the AI and that’s just how we talk now
I’ve been trying my hand at writing for a number of years, and I’ve been using em dashes because I saw the writers I read using them. Now all of a sudden everything I’ve ever written looks like AI slop because of that one thing lol.
I used them a lot in college. Glad I graduated in 22 right before AI took over.
ChatGPT is a no talent assclown
And as a long-time en dash aficionado, I’d be instantly exposed by those lesser em dashes appearing in my communications.
AI is not just stealing our patterns, it’s creating a language from the scraps we give up in order not to be mistaken for it!
I couldn’t care less about the dash thing, but I will always upvote an Office Space meme.