LLMs are spam generators.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
@talin @cstross I don't think there was ever a time in human history when critical thinkers were more than a small minority of the population. LLMs are unprecedented in the scale at which they can produce misinformation. Who knows? Exposed to so much bullshit, people may become more critical of what they see. It's already happening with images and videos, right?
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross you and Neal Stephenson could write a prequel to Anathem focusing on the IT guy that the monks deal with in the main book. ;p
-
@ViennaMike You're using code generators. Not the same thing, frankly. Stop generalizing your experience as a developer to the public at large, who only see magic talking box.
@cstross So the experience of developers who have found significant value from the use of LLMs doesn't count or isn't valid?
Used to WRITE material, they aren't great. But to organize, summarize, do research and help brainstorm, they can be useful.
While ChatGPT currently can't pass the IRS' volunteer tax assessor's exam (see https://www.mcgurrin.info/robots/8298/), I have confidence that a RAG-based system could do so.
Heck, COMPUTERS are magic to most people, as are the internet, radios and TV.
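For readers unfamiliar with the term, here is a minimal sketch of the kind of RAG (retrieval-augmented generation) setup the commenter is gesturing at. Everything in it is an illustrative assumption: the CORPUS snippets stand in for real IRS publications, retrieve() is a toy keyword matcher rather than a proper vector search, and generate() is a stub for whichever LLM API you would actually call.

```python
# Hypothetical RAG sketch: retrieve relevant passages, then ask the model to
# answer only from those passages. Corpus, retriever, and generate() stub are
# placeholders, not real IRS data or any specific vendor's API.

CORPUS = {
    "pub501": "Filing status and dependents: who must file, who counts as a dependent.",
    "pub17": "General rules for individual income tax: income, adjustments, credits.",
    "pub970": "Tax benefits for education: credits, deductions, savings plans.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank passages by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        CORPUS.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for a call to whatever LLM you use; just echoes the prompt here."""
    return f"[model answer grounded in]\n{prompt}"

def answer(question: str) -> str:
    context = "\n---\n".join(retrieve(question))
    prompt = (
        "Answer using only the excerpts below, and cite the excerpt you relied on.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

if __name__ == "__main__":
    print(answer("Which education credits can I claim?"))
```

The only point of the pattern is that the model answers from retrieved source text rather than purely from its weights, which is why the commenter expects it to do better on something like the IRS exam than a bare chatbot would.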
-
There is some real utility under the hood, but goddamn.
The pushback from higher utility bills and GenAI diarrhea is justified, and a huge black eye for practical use.
@ThePowerNap @cstross @ViennaMike
Sure, there are some really helpful use cases for LLMs. I reckon that if the LLM industry manages to settle down into supplying only the helpful, sensible use cases, it will be about as big, in terms of annual dollar throughput, as e.g. the electric kettle manufacturing sector.
But to get to that sustainable level, the LLM industry would have to shrink its market cap by three orders of magnitude. And it probably can't do that without collapsing entirely.
-
@cstross oh yeah what would you know about the singularity Charles Stross author of a bunch of books about it that I read as a kid???
-
@cstross Love your fiction, can't agree on this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators as you have, or "parrots" as others have, is simply incorrect.
@ViennaMike @cstross Most LLMs are optimised for generating text that is as difficult as possible for humans to find faults in.
This doesn't mean that they are mostly correct; it means that the faults that are there are very difficult for humans to find.
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).
-
@ThePowerNap @cstross I don't disagree. I think part of the path going forward will be to push more to the edge, as we've seen with other types of AI, using less powerful, but good enough, models for many applications. Models that don't require as much energy to run.
-
@VictimOfSimony @cstross If you want to debate whether LLMs are, as a whole, a net positive or a net negative, I agree that's debatable. Intelligent people can legitimately disagree. One cannot, based on the evidence, legitimately claim that they only produce spam.
I'll let @cstross speak for themselves, but I really do see this as a "define your spam" situation. The same problem drove the crossbow/harquebus debate that ended with the Pope banning crossbows but not firearms. The trouble is that "spam" is a byword for low-value material of any sort, slop or not, and the idea of high-quality advertising sounds like a waste of the technology. No person off the street will ever care how high-quality your product propaganda is; the material is always worthless and unwanted.

The Pope didn't hate all violence; he hated the idea of serfs killing the upper class. Confronted with two ways to let peasants put holes in plate armor with minimal training, he banned only the one that didn't tend to blow up and didn't require a synthesized chemical fuel. As both technologies advanced, war didn't stop, but soldiers stopped wearing armor and, as a result, eventually stopped fighting in tight formations. What we're going to see with LLMs is the technology being run by services that charge per question, and licensing of systems that are capable of spitting out reliable answers.

Is that better?

Intelligent, well-informed adults disagree.

-
@cstross great for trolling trolls.
-
@jbowen @virtualbri @cstross I prefer Artificial Incompetence
@Pionir @virtualbri @cstross Oh I love it! I'll definitely be making use of it :)
-
@cstross I wrote an essay draft in my master's (early 2000s) against the Turing test as proof of intelligence. There I said that if a similar approach were taken to study life, we would be assigning life to scarecrows, since they seem alive enough to scare crows (at least to the crows). Instead, I proposed an #EcologyOfIntelligence where the key is not imitation but complementarity/synergy between intelligences: individual, collective, human, non-human, etc.
-
Keep telling yourself that. They can be smart AF sometimes. They're not perfect, but getting better. Six months ago I was the one catching them when they were slipping. Now they catch me. The days of spicy autocomplete are falling behind us.
-
@cstross "Spamularity" look like an early contended for 2026's word of the year. Nice!
-
@cstross The slopocalypse.
-
@cstross Kessler Syndrome for the internet
-
@ThePowerNap @cstross I don't disagree. I think part of the path going forward will be to push more to the edge, as we've seen with other types of AI, using less powerful, but good enough, models for many applications. Models that don't require as much energy to run.
Those are my bread and butter
-
It's "intelligent."
"Yes, thank you." I now know what pan of the bell-curve you sived out of.
-
@cstross That just makes me wish the singularity was people all over the planet bursting into song.
-
It is not even the good spam; it tastes terrible.