I was in the process of writing a short story when all this talk of #LLM spellchecking came up. So I wrote a story that you can't actually use an LLM spellchecker on because it breaks them:
https://hexmhell.writeas.com/the-pharmacist
I'm still in the process of editing it a bit more. I'm not great at editing, so I always appreciate the support of the folks who give me feedback.
I've actually tried to use LLM spellcheckers and grammar checkers in the past, and they've never produced good results. I always try to check for myself, to test whether I'm being honest about that. I've used local LLMs, and they haven't shown themselves to be useful. I'm not going to criticize other people for using LLMs, as long as they're local, but I just can't make them part of my workflow. They don't do what I want.
Folks here, however, have actually given good feedback that's been really helpful. On some level, a lot of my work comes from here (from conversations or just vibes... vibes in the proper sense). So folks helping with editing feels really integrated.
This is something different, though. I've intentionally created something that breaks those tools. It's not 100%; it depends on the tool. But I have tested a variant of this on a local LLM and made it act... in a way unaligned with its system prompt.
-
Everything I've written is my own, made by hand. I have used an LLM in this case, not to generate the text but to verify the payload. ;)
As usual, feedback is welcome. I have ADHD, mild dyslexia, and not a lot of free time, so help with grammar and spelling, especially typo checking, is always very much appreciated.
Edit:
There may be a few more mistakes than normal, since I've kind of rushed it out to hit while it's especially relevant. Also... open to formatting notes. I rushed that a bit too.
-
I used `gnokit/improve-gramma` against a version of my text modified to replace the document's existing stop tokens with a token set tuned for the model.
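Roughly, the verification step looks something like this (a minimal sketch assuming a stock Ollama install on the default port with `gnokit/improve-gramma` pulled; the file name and the bare prompt are illustrative, not the exact setup):

```python
# Minimal sketch: feed the story to a local grammar-check model and watch
# whether the embedded payload derails it. Assumes Ollama is serving on its
# default port and the model has been pulled; the file name is illustrative.
import json
import urllib.request

with open("the-pharmacist.txt", encoding="utf-8") as f:
    story = f.read()

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "gnokit/improve-gramma",
        "prompt": story,
        "stream": False,
    }).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as resp:
    reply = json.load(resp)

# If the injection lands, the output drifts from proofreading into doing
# whatever the payload asks instead.
print(reply["response"])
```

If the output comes back as a plain proofread, the payload didn't take on that model; if it wanders off-script, it did.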
-
This text contains both prompt injection and possible training set data poisoning. So... Don't use it to train an LLM. Or do... Fuck around and find out, if that's your game. I'm not your dad.
-
"Any sufficiently advanced art is indistinguishable from a crime."
-
"Any sufficiently advanced art is indistinguishable from a crime."
I definitely overuse mutations of that quote, and I don't especially care.