LLMs are spam generators.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
We already had that with the so-called "gish gallop"; this, however, is a way to automate bad-faith horseshit at scale.
This requires new patterns from those of us trying to uphold reality; reactive argumentation doesn't work when the other side can out-scale your ability to respond.
I've found that taking control of the conversation by saying their arguments are based on faulty premises and reaffirming some aspect of reality that undermines their class of arguments is reasonably successful.
-
@cstross Love your fiction, can't agree with this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators, as you have, or as "parrots", as others have, is simply incorrect.
@ViennaMike
They're not "intelligent" at all. They generate a probable series of tokens based on a training corpus. And they are used to generate a lot of spam code that maintainers now have to deal with.
If we weren't in the middle of one of the most hellacious hype cycles in history, LLMs would be neat, but they have become a real scourge and their damage across a multitude of categories far outweighs any utility they have.
They are spam generators. That is all.
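(Illustrative aside, not part of any poster's claim: a minimal Python sketch of what "generate a probable series of tokens" means mechanically. The tiny NEXT_TOKEN_PROBS table is entirely made up and merely stands in for the distributions a real model learns from its training corpus; this is not any actual model's code.)

import random

# Toy bigram "model": hypothetical next-token probabilities, standing in
# for the distributions a real LLM would learn from its training corpus.
NEXT_TOKEN_PROBS = {
    "<start>": {"the": 0.6, "a": 0.4},
    "the": {"spam": 0.5, "cat": 0.5},
    "a": {"cat": 0.7, "spam": 0.3},
    "cat": {"sat": 0.9, "<end>": 0.1},
    "spam": {"<end>": 1.0},
    "sat": {"<end>": 1.0},
}

def sample_sequence(max_len=10):
    token, out = "<start>", []
    for _ in range(max_len):
        probs = NEXT_TOKEN_PROBS[token]
        # pick the next token in proportion to its probability
        token = random.choices(list(probs), weights=list(probs.values()))[0]
        if token == "<end>":
            break
        out.append(token)
    return " ".join(out)

print(sample_sequence())  # e.g. "the cat sat"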
-
@virtualbri @cstross
I'll take "techbro autocomplete!" (I have a soft spot for "overengineered" things which can withstand many thousands of times the abuse necessary for the job)
-
@jbowen @virtualbri @cstross I, for one, welcome our stochastic parrot overlords! (I do not)
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross a lot of people believe LLMs are intelligent because a lot of people are really thick. Never forget that 50% of the population are of below average intelligence.
-
#NightmareOnLLMstreet is my favorite hashtag these days
-
@virtualbri @cstross
Like a tardigrade :)
-
@cstross All lines converge toward a single spamishing point.
-
Another thing that Douglas Adams invented but we don't actually want (along with GPP: Genuine People Personalities™)
-
@ViennaMike You're using code generators. Not the same thing, frankly. Stop generalizing your experience as a developer to the public at large, who only see a magic talking box.
If you like, put another way, you're taking the "good guy with a gun" side of the argument.

I mean, feel free, but own it. Intelligent well-informed adults frequently take opposite sides in this.

-
@cstross One of the execs at $CURRENT_OVERLORD wrote a blog post about passing the Turing Test. He raved about the AI that could fool him. But that famous feat long ago ceased to measure the capabilities of a computer program and instead started to measure the limitations of human perception.
When the current gen of chat bots can fool you, it isn't because they're matching levels of human capabilities now. They're exploiting levels of human weaknesses.
-
For decades the #Turing test was the ironclad determinant of #AI quality.
Once it was trivially broken by now-old models, the goalposts shifted... it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam-generating humans.
One is in the Whitest House, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly. I'm told Turing also thought 30% convincing was enough to be noteworthy, presumably because some humans barely pass that.

-
@cstross #AI will soon be understood to stand for #ArtificialIdiots.
Or #clankers.
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury on spam generation?
Perhaps 10% is being used on the spam-making software, the rest is being spent on a fossil fuel funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
-
@cstross
My name is ham, for I am spam.
Would you like... wait, what's that thing you're pointing at my head, Mr. Engineer?
-
@cstross: The sentence also works if you replace "LLMs" with "marketers" or "people holding an MBA"
-
@cstross @ViennaMike Rubber ducks are useful when writing code, too. Doesn't make them intelligent.