LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
-
A product where the target demographic is Kevin J Anderson.
@david_chisnall @cstross I think I get this reference. (Confused Captain America face) But I was leaning into how an author would need to be told that they don't understand their own oeuvre. Something AI would excel at.
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).
@ThePowerNap @dragonfrog @cstross @ViennaMike The best purpose of an LLM system is to provide a natural language interface to a complex but not mission-critical system such as home automation, media libraries and switchboards.
Machine learning, on the other hand, should lean into seeing without eyes (is a dramatic jump in intensity cancer?)
Both of these cases should be powered through a single IEC C13 socket and not some GW-scale data centre.
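A minimal sketch of that kind of interface, where the model may only pick from a whitelist of benign commands (the command names and the llm callable are illustrative stand-ins, not from the thread):

    # Python sketch: the LLM only selects from a fixed command set,
    # so the worst failure mode is picking the wrong benign action.
    ALLOWED_COMMANDS = {"lights_on", "lights_off", "play_music", "stop_music"}

    def interpret(utterance: str, llm) -> str:
        prompt = (
            "Map the request to exactly one of: "
            + ", ".join(sorted(ALLOWED_COMMANDS))
            + f"\nRequest: {utterance}\nCommand:"
        )
        command = llm(prompt).strip()
        # Refuse anything outside the whitelist rather than trusting the model.
        if command not in ALLOWED_COMMANDS:
            raise ValueError(f"unrecognized command: {command!r}")
        return command

    # Usage with a stand-in "LLM"; a real deployment would call a model.
    fake_llm = lambda prompt: "lights_off"
    print(interpret("It's too bright in here", fake_llm))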
-
@cstross LLMs always remind me of the protagonists of “Accelerando” finally making contact with a whole system of alien AIs, only to learn that they’re all just jumped-up versions of the Nigerian prince scam.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
@cstross Because so many people boosted my post, I want to explore this topic in more detail.
First, a thought experiment: suppose we had cheaply scalable mind control, even if partially effective (LLMs are not mind control but they touch some of the same wires). Democracy would end, since the controllers could sway elections at will. The so-called free market would instantly become a command economy. Science, and any other activity requiring independent thought, would be a dead letter.
-
@cstross In the book _Rainbows End_, Vinge proposes a form of mind control called "YGBM" technology, which stands for "You Gotta Believe Me", a way to get affected individuals to accept arbitrary propositions as truth.
What this has in common with LLMs is technologically enhanced plausibility.
-
@talin I was going to mention "Rainbows End"!
-
@cstross Now, we know from the earliest days of machine learning that algorithms are capable of exploiting any loophole in their fitness function with seemingly godlike competence.
Unfortunately, we can't make a fitness function for intelligence, truthfulness, or integrity. What we can do, at great expense, is make a fitness function for plausibility.
When an LLM goes through its reinforcement learning, its behavior is rewarded based on whether some human reviewer believed the result.
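A toy sketch of that reward loop, to make the point concrete (purely illustrative; no real RLHF pipeline is this simple, and the example data is made up):

    # Toy sketch: candidate answers are scored purely by whether a
    # reviewer believed them, never by whether they are true.
    candidates = [
        ("The moon is made of rock.", True),     # believed, and true
        ("The moon is made of cheese.", False),  # not believed
        ("Confident-sounding nonsense.", True),  # believed, but false!
    ]

    def plausibility_reward(reviewer_believed: bool) -> float:
        # The fitness function only sees belief.
        return 1.0 if reviewer_believed else 0.0

    rewards = {text: plausibility_reward(b) for text, b in candidates}
    # A model trained on these rewards maximizes believability,
    # crediting the confident falsehood as much as the truth.
    print(rewards)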
-
@cstross As LLMs improve, they will get better at chasing their fitness function, that is, at convincing people that what they say is true. This is what LLMs have in common with YGBM.
Those who know history know that the invention of cheap printing sparked off centuries of bloody religious conflict. In this regard, LLMs are like "hold my beer".
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury through spam generation?
Perhaps 10% is being spent on the spam-making software; the rest is going to a fossil-fuel-funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
@Npars01 Yep, 💯
-
@Npars01 @cstross
Spam-making software assists the Nazis too.
"The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction (i.e., the reality of experience) and the distinction between true and false (i.e., the standards of thought) no longer exist." ~ Hannah Arendt
-
@cstross Well, they are statistical models of language. To be precise, statistical models of the corpus they were trained on, plus a number of other factors, like the representation used and what cleanups were applied.
Yes, one way to use these models, the one that seems to fascinate most people (but IMHO not necessarily the most useful, depending upon what one wants to achieve), is to complete a prefix with a plausible ending, statistically speaking.
Generally, they are a compelling NLP technology.
But yes, if you generate text with an LLM, you generally get text that is basically a "probable" text, given the training and the hyperparameters. And let's be honest: people who rely upon the training data of an LLM as a "search engine data corpus" do have a problem.
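A toy illustration of "complete a prefix with a plausible ending": a bigram counter standing in for a vastly larger model (the corpus here is made up):

    # Python sketch: pick the statistically likely next word from
    # observed bigram counts, the crudest form of "probable text".
    from collections import Counter, defaultdict
    import random

    corpus = "the cat sat on the mat the cat ate the fish".split()
    bigrams = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        bigrams[a][b] += 1

    def complete(prefix_word: str, length: int = 5) -> str:
        out = [prefix_word]
        for _ in range(length):
            nxt = bigrams.get(out[-1])
            if not nxt:
                break
            # Sample the next word proportionally to observed frequency.
            out.append(random.choices(list(nxt), weights=list(nxt.values()))[0])
        return " ".join(out)

    print(complete("the"))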
🙅
-
@VictimOfSimony @ViennaMike@mastodon.social If I were suddenly planetary overlord, I'd ban the advertising industry. Seriously, it's cultural poison. It doesn't matter whether it's cheap spam or brand marketing for YSL; it's garbage all round, designed to manipulate us into doing things we wouldn't otherwise do (for someone else's profit).
As a #PublicPolicy wonk I'm already remembering how different our common law legal systems have become in just three centuries. Over here they're still worried about the anti-porn #Censorship case where the #SupremeCourt refused to define what they were censoring. They famously said, "I know it when I see it," grumble grumble prurient use in commerce, harrumph typical opinion of ordinary firmness, followed by thirty years of chipping off corners without admitting they were sort of making shit up.
-
@davidgerard @cstross The article you posted included a very useful study. However, contrary to the article, that study is NOT the only study on the topic. Other, far larger and more comprehensive studies show measured, meaningful productivity improvements. You asked for measurements, so here they are: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 and https://www.youtube.com/watch?v=tbDDYKRFjhk The Stanford study found that the average productivity boost is significant (~20%), but some teams see productivity decreases.
-
@cstross I understand what you mean, but I’d say they’re simply databases. Spam generator is one of their many uses.
-
@ViennaMike @cstross Your paper's key metric is pull requests, which is addressed in the link you answered but didn't read (one Jira leaves, three Jiras enter). But thanks anyway.
(Over and over I see this from AI guys: rebuttals that ignore the question asked.)
-
@cstross Kessler Syndrome for the internet
-
@ThePowerNap @dragonfrog @cstross @ViennaMike I suspect that it'll be good for the ML space -- the conflation of LLM prompting with the harder skills required to actually train a model has been disastrous for the market. The industry isn't sophisticated enough for people to distinguish between the two skillsets, so the latter market has just been absolutely deleted in Australia.
@ludicity @dragonfrog @cstross @ViennaMike
I'm kinda betting on this, but it doesn't mean the next couple of years won't be painful.
-
@NefariousCelt @dragonfrog @cstross @ViennaMike
95% agree. I'm currently working on an LLM project that is interfacing with something a touch more than home automation. Aside from fine-tuning the model and putting constraints on the output (think https://github.com/dottxt-ai/outlines), you can use existing approved software to validate the LLM output.
All of this requires in-depth knowledge, patience and exhaustive testing.
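For instance, a minimal sketch of that constrained-output-plus-validation idea, assuming the outlines library's JSON-generation interface (the model name, schema, and approved-command set are all made up for illustration):

    # Python sketch using dottxt-ai/outlines to force structured output,
    # then gating the result with an approved whitelist.
    from pydantic import BaseModel
    import outlines

    class DeviceCommand(BaseModel):
        device: str
        action: str

    # Model choice is illustrative.
    model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
    generate_command = outlines.generate.json(model, DeviceCommand)

    cmd = generate_command("Turn off the hallway lights.")

    # Final gate: approved software, not the model, decides what is allowed.
    APPROVED = {("hallway_lights", "on"), ("hallway_lights", "off")}
    if (cmd.device, cmd.action) not in APPROVED:
        raise ValueError(f"rejected: {cmd!r}")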
-
@ThePowerNap @dragonfrog @cstross @ViennaMike I suspect I am coming from a very risk-averse standpoint. The "summarise this" use case is what scares me about LLM marketing. I have been burned by poor software requirement specifications, and language simplification can do that in multiple sectors. But yeah, mapping natural language to a fixed set of goals, that's fine. Trust AI with my life? No. Sudden random steering inputs, WTAF Stellantis.