LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
-
@cstross @VictimOfSimony also i'm pretty sure from a couple of years of observation that actually, LLM code is spam too
@cstross @VictimOfSimony LLM code's real trick is to induce gambling addiction in its suckers, who will swear it works great just one more prompt bro https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
-
@NefariousCelt But I don't NEED that! I've developed that skill for myself, the hard way!
A product where the target demographic is Kevin J Anderson.
-
@ThePowerNap @cstross @ViennaMike
Sure, there are some genuinely helpful use cases for LLMs. I reckon if the LLM industry manages to settle down into supplying only the helpful, sensible use cases, it will be about as big - in terms of annual dollar throughput - as e.g. the electric kettle manufacturing sector.
But to get to that sustainable level the LLM industry would have to shrink its market cap by 3 orders of magnitude. And it probably can't do that without collapsing entirely.
@dragonfrog @ThePowerNap @cstross @ViennaMike I have occasionally been using an AI tool called Phind. It was basically a "help with coding" tool. It recently rewrote itself completely into a generic AI tool… and then a month later closed.
I can't help but wonder… Is this the start of the great collapse?
-
@cstross Kessler Syndrome for the internet
-
@david_chisnall @cstross I think I get this reference. (Confused Captain America face) But I was leaning into how an author needs to be told that they do not understand their own oeuvre. Something AI would excel at.
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).
@ThePowerNap @dragonfrog @cstross @ViennaMike The best purpose of an LLM system is to provide a natural language interface to a complex but not mission-critical system such as home automation, media libraries and switchboards.
Machine learning, on the other hand, should lean into seeing without eyes (is a dramatic jump in intensity cancer?).
Both of these cases should be powered through a single IEC C13 socket and not some GW data centre.
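A minimal sketch of that first pattern, with the local endpoint, the prompt, and the device list all invented for illustration: the LLM only translates the request into a structured command, and a plain allowlist decides what actually runs, so the worst a hallucination can do is nothing.

```python
# Hypothetical sketch: an LLM as a natural-language front end to home
# automation. The local endpoint, prompt, and device list are made up
# for illustration; the point is the LLM only *translates*, it never
# gets authority over the devices.
import json
import requests

ALLOWED = {("lights", "on"), ("lights", "off"),
           ("heating", "on"), ("heating", "off")}

PROMPT = ('Translate the user\'s request into JSON like '
          '{"device": "lights", "action": "off"}. Request: ')

def interpret(utterance: str):
    # Assumed llama.cpp-style local server -- i.e. hardware on one C13
    # socket, not a data centre. Adjust to whatever you actually run.
    resp = requests.post("http://localhost:8080/completion",
                         json={"prompt": PROMPT + utterance, "n_predict": 64},
                         timeout=10)
    try:
        cmd = json.loads(resp.json()["content"])
        pair = (cmd["device"], cmd["action"])
    except (KeyError, TypeError, ValueError):
        return None  # model rambled; do nothing
    # Unknown or made-up commands are dropped here, not executed.
    return pair if pair in ALLOWED else None

print(interpret("it's way too bright in here"))  # ideally ('lights', 'off')
```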
-
@cstross LLMs always remind me of the protagonists of “Accelerando” finally making contact with a whole system of alien AIs, only to learn that they’re all just jumped-up versions of the Nigerian prince scam.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
@cstross Because so many people boosted my post, I want to explore this topic in more detail.
First, a thought experiment: suppose we had cheaply scalable mind control, even if partially effective (LLMs are not mind control but they touch some of the same wires). Democracy would end, since the controllers could sway elections at will. The so-called free market would instantly become a command economy. Science, and any other activity requiring independent thought, would be a dead letter.
-
@cstross In the book _Rainbows End_, Vinge proposes a form of mind control called "YGBM" technology, which stands for "You Gotta Believe Me", a way to get affected individuals to accept arbitrary propositions as truth.
What this has in common with LLMs is technologically enhanced plausibility.
-
@talin I was going to mention "Rainbows End"!
-
@cstross Now, we know from the earliest days of machine learning that algorithms are capable of exploiting any loophole in their fitness function with seemingly godlike competence.
Unfortunately, we can't make a fitness function for intelligence, truthfulness, or integrity. What we can do, at great expense, is make a fitness function for plausibility.
When an LLM goes through its reinforcement learning, its behavior is rewarded based on whether some human reviewer believed the result.
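To make that concrete: a toy sketch of the reward-modelling step, not any lab's actual pipeline, with all data and shapes invented. The reward model is fitted to pairwise human preferences - which of two answers the reviewer found more convincing - and nothing in the loss ever touches ground truth:

```python
# Toy sketch of a preference-based "plausibility fitness function",
# in the style of RLHF reward modelling. All data here is a random
# stand-in; a real reward model is a full transformer, not one layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # stand-in for a transformer + head

    def forward(self, answer_embedding: torch.Tensor) -> torch.Tensor:
        return self.score(answer_embedding).squeeze(-1)

rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

# Each example is a pair of answers plus which one the human reviewer
# *believed*. Truthfulness is never measured anywhere in this loop.
chosen = torch.randn(32, 16)    # embeddings of answers the reviewer preferred
rejected = torch.randn(32, 16)  # embeddings of answers the reviewer rejected

for _ in range(100):
    # Standard pairwise (Bradley-Terry style) preference loss.
    loss = -F.logsigmoid(rm(chosen) - rm(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The LLM is then tuned to maximize rm(...), i.e. to maximize
# "a human found this convincing" - plausibility, not truth.
```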
-
@cstross As LLMs improve, they will get better at chasing their fitness function, that is, at convincing people that what they say is true. This is what LLMs have in common with YGBM.
Those who know history know that the invention of cheap printing sparked off centuries of bloody religious conflict. In this regard, LLMs are like "hold my beer".
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury through spam generation?
Perhaps 10% is being spent on the spam-making software; the rest is going to a fossil-fuel-funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
@Npars01 Yep, 💯
-
@Npars01 @cstross
Spam-making software assists the Nazis too.
"The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist." ~ Hannah Arendt
-
@cstross Well, they are statistical models of language - to be precise, of the corpus they were trained on, with a number of other aspects in play, like representation and what cleanups were applied.
Yes, one way to use these models, the one that seems to fascinate most people (but IMHO not necessarily the most useful, depending upon what one wants to achieve), is to complete a prefix with a plausible ending, statistically speaking.
Generally, they are a compelling NLP technology.
But yes, if you generate text with an LLM, you generally get text that is basically a "probable" text, based upon the training and the hyperparameters. And let's be honest, people who rely upon the training data of an LLM as a "search engine data corpus" do have a problem.
🙅
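To make the "probable text" point concrete: a toy bigram Markov chain, the many-orders-of-magnitude-smaller ancestor of an LLM. Everything here is invented for illustration, but the principle is the same - the completion is whatever the training corpus made statistically likely:

```python
# Toy illustration of "a statistical model of its training corpus":
# a bigram Markov chain. Same principle as an LLM's sampling, minus
# a few hundred billion parameters: continuations are drawn from
# whatever the corpus made probable, with no notion of truth.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which words follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def complete(prefix: str, length: int = 6) -> str:
    words = prefix.split()
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break  # dead end: the corpus never continued this word
        # Sample proportionally to corpus frequency: probable, not true.
        words.append(random.choice(options))
    return " ".join(words)

print(complete("the cat"))  # e.g. "the cat sat on the rug and"
```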
-
@VictimOfSimony @ViennaMike@mastodon.social If I were suddenly planetary overlord, I'd ban the advertising industry. Seriously, it's cultural poison. Doesn't matter whether it's cheap spam or brand marketing for YSL, it's garbage all round, designed to manipulate us into doing things we wouldn't otherwise do (for someone else's profit).
As a #PublicPolicy wonk I'm already remembering how different our common law legal systems have become in just three centuries. Over here they're still worried about the anti-porn #Censorship case where the #SupremeCourt refused to define what they were censoring. They famously said, "I know it when I see it," grumble grumble prurient use in commerce, harrumph typical opinion of ordinary firmness, followed by thirty years of chipping off corners without admitting they were sort of making shit up.
-
@davidgerard @cstross The article you posted included a very useful study. However, contrary to the article, that study is NOT the only study on the topic. Other, far larger and more comprehensive studies show measured, meaningful productivity improvements. You asked for measurements, so here they are: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 and https://www.youtube.com/watch?v=tbDDYKRFjhk The Stanford study found that the average productivity boost is significant (~20%), but some teams see a productivity decrease.
-
@cstross I understand what you mean, but I’d say they’re simply databases. Spam generator is one of their many uses.
-
@ViennaMike @cstross your paper's key metric is pull requests, which is addressed in the link you replied to but didn't read (one jira leaves, three jiras enter). but thanks anyway.
(over and over i see this from AI guys, rebuttals that ignore the question asked)