LLMs are spam generators.
-
@ViennaMike You're using code generators. Not the same thing, frankly. Stop generalizing your experience as a developer to the public at large, who only see magic talking box.
If you like, put another way, you're taking the "good guy with a gun" side of the argument.

I mean, feel free, but own it. Intelligent well-informed adults frequently take opposite sides in this.

-
@cstross All lines converge toward a single spamishing point.
@cstross One of the execs at $CURRENT_OVERLORD wrote a blog post about passing the Turing Test. He raved about the AI that could fool him. But that famous feat has long ago ceased to measure the capabilities of a computer program, and instead started to measure the limitations of human perceptions.
When the current gen of chat bots can fool you, it isn't because they're matching levels of human capabilities now. They're exploiting levels of human weaknesses.
-
For decades the #Turing test was the ironclad determinant of #AI quality.
Once it was trivially broken by now-old models, the goalposts have shifted... it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam generating humans.
One is in the Whitest House, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly. I'm told Turing also thought 30% convincing was enough to be noteworthy, presumably because some humans barely pass that.

-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross #AI will soon be understood to stand for #ArtificialIdiots.
Or #clankers.
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury on spam generation?
Perhaps 10% is being used on the spam-making software, the rest is being spent on a fossil fuel funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
-
@cstross
My name is ham, for I am spam.
Would you like... wait, what's that thing you're pointing at my head, Mr. Engineer?
-
@cstross: The sentence also works if you replace "LLMs" with "marketers" or "people holding an MBA".
-
@cstross @ViennaMike Rubber ducks are useful when writing code, too. Doesn't make them intelligent.
-
@cstross Spam generators without culture.
-
@jbowen @virtualbri @cstross I prefer Artificial Incompetence
-
Another thing that Douglas Adams invented but we don't actually want (along with GPP : Genuine People Personalities™)
@dr_barnowl @talin @cstross That was a quite confusing thing to read, knowing about the real WayForward Technologies, a video game company best known for the Shantae series. I wonder if the real company was named after the fictional one? They both seem to have had founders named Way.
-
@VictimOfSimony @cstross If you want to debate whether or not, as a whole, LLMs are a net positive or net negative, I agree. Intelligent people can legitimately disagree. One cannot, based on the evidence, legitimately claim that they only produce spam.
-
@cstross LLMs are in some way a reverse Turing test - one we're frequently failing.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
@talin @cstross I don't think there was ever a time in human history when critical thinkers were more than a small minority of the population. LLMs are unprecedented in the scale at which they can produce misinformation. Who knows? Exposed to so much bullshit, people may become more critical of what they see. It's already happening with images and videos, right?
-
@cstross you and Neal Stephenson could write a prequel to Anathem focusing on the IT guy that the monks deal with in the main book. ;p
-
@cstross So the experience of developers who have found significant value from the use of LLMs doesn't count or isn't valid?
Used to WRITE material, they aren't great. But to organize, summarize, do research and help brainstorm, they can be useful.
While ChatGPT currently can't pass the IRS' volunteer tax assessor's exam (see https://www.mcgurrin.info/robots/8298/), I have confidence that a RAG based system could do so.
Heck, COMPUTERS are magic to most people, as are the internet, radios and TV.
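For what it's worth, here is a minimal sketch of what the RAG idea mentioned above might look like over the IRS volunteer-training publications. Everything in it is a placeholder: embed() and complete() stand in for a real embedding model and a real LLM call, and the document snippets are made up, not actual IRS text.

```python
# Hypothetical RAG sketch: retrieve the most relevant publication snippets,
# then ask the model to answer only from that retrieved context.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder embedding: a pseudo-random vector per text (stable within a run),
    # standing in for a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def complete(prompt: str) -> str:
    # Placeholder for an LLM completion call.
    return "(model answer grounded in the retrieved passages)"

documents = [
    "Pub 4491: filing status rules ...",
    "Pub 4012: standard deduction tables ...",
    "Pub 17: dependents and credits ...",
]
doc_vectors = np.stack([embed(d) for d in documents])

def answer(question: str, k: int = 2) -> str:
    q = embed(question)
    # Cosine similarity between the question and each document snippet.
    sims = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = "\n".join(documents[i] for i in np.argsort(sims)[-k:])
    return complete(f"Answer using only this context:\n{context}\n\nQ: {question}")

print(answer("Can this taxpayer claim head of household status?"))
```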
-
There is some real utility under the hood, but goddamn.
The pushback from higher utility bills and GenAI diarrhea is justified, and a huge black eye for practical use.
@ThePowerNap @cstross @ViennaMike
Sure, there are some genuinely helpful use cases for LLMs. I reckon if the LLM industry manages to settle down into supplying only the helpful, sensible use cases for LLMs, it will be about as big - in terms of annual dollar throughput - as e.g. the electric kettle manufacturing sector.
But to get to that sustainable level the LLM industry would have to shrink its market cap by 3 orders of magnitude. And it probably can't do that without collapsing entirely.
-
@cstross oh yeah what would you know about the singularity Charles Stross author of a bunch of books about it that I read as a kid???
-
@cstross Love your fiction, can't agree on this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators as you have, or "parrots" as others have, is simply incorrect.
@ViennaMike @cstross Most LLMs are optimised for generating text that is as difficult as possible for humans to find faults in.
This doesn't mean that they are mostly correct; it means that the faults that are there are very difficult for humans to find.
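A toy illustration of that optimisation pressure, assuming the usual preference-training setup: the only signal is which of two answers a human rater preferred, i.e. which one they failed to find fault with, and nothing in the loss rewards being factually correct. A generic sketch, not any particular lab's code.

```python
# Toy reward model trained on pairwise human preferences (Bradley-Terry loss).
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        # Scores a response (here: a made-up feature vector) with a single scalar.
        self.score = nn.Linear(dim, 1)

    def forward(self, x):
        return self.score(x).squeeze(-1)

def preference_loss(r_chosen, r_rejected):
    # Push the rater-preferred answer's score above the rejected one's.
    # The signal is "which answer did the human fail to find fault with",
    # not "which answer is true".
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake batch of (preferred, rejected) answer features.
chosen, rejected = torch.randn(8, 64), torch.randn(8, 64)
loss = preference_loss(model(chosen), model(rejected))
opt.zero_grad()
loss.backward()
opt.step()
print(f"pairwise preference loss: {loss.item():.3f}")
```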
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).