LLMs are spam generators.
-
@VictimOfSimony @ViennaMike@mastodon.social If I was suddenly planetary overlord, I'd ban the advertising industry. Seriously, it's cultural poison. Doesn't matter whether it's cheap spam or brand marketing for YSL, it's garbage all round, designed to manipulate us into doing things we wouldn't otherwise do (for someone else's profit).
@cstross @VictimOfSimony it's nice to hear we're not the only ones who've come to that conclusion (sigh)
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).
@ThePowerNap @dragonfrog @cstross @ViennaMike I suspect that it'll be good for the ML space -- the conflation of LLM prompting with the harder skills required to actually train a model has been disastrous for the market. The industry isn't sophisticated enough for people to distinguish between the two skillsets, so the latter market has just been absolutely deleted in Australia.
-
@cstross I disagree with you there.
I still believe the perfect use case for AI in content creation is to generate a virtual fan boy to tell an author that they are canonically wrong.
Self-hosted self-doubt, as it were, and less public than Usenet.
@NefariousCelt But I don't NEED that! I've developed that skill for myself, the hard way!
-
@cstross would you have imagined that back when you wrote Accelerando?
@root42 No, but I coined "the spamularity" in Rule 34, which I wrote in 2008/09.
-
For decades the #Turing test was the ironclad determinant of #AI quality.
Once it was trivially broken by now-old models, the goalposts shifted... it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam generating humans.
One is in the Whitest house, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly.
@n_dimension @cstross "any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly." - I'm so glad I'm not the only one with this feeling! :D
On the other hand, human intelligence was never properly defined. IMO the shifting goalposts here indicate that we have an improved, yet still incomplete, understanding of how intelligence may be defined, rather than that we're dismissing intelligence because we don't like the looks of it.
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury on spam generation?
Perhaps 10% is being used on the spam-making software, the rest is being spent on a fossil fuel funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
-
@VictimOfSimony @ViennaMike@mastodon.social If I was suddenly planetary overlord, I'd ban the advertising industry. Seriously, it's cultural poison. Doesn't matter whether it's cheap spam or brand marketing for YSL, it's garbage all round, designed to manipulate us into doing things we wouldn't otherwise do (for someone else's profit).
@cstross @VictimOfSimony also i'm pretty sure from a couple of years of observation that actually, LLM code is spam too
-
@cstross @VictimOfSimony also i'm pretty sure from a couple of years of observation that actually, LLM code is spam too
@cstross @VictimOfSimony LLM code's real trick is to induce gambling addiction in its suckers, who will swear it works great just one more prompt bro https://pivot-to-ai.com/2025/06/05/generative-ai-runs-on-gambling-addiction-just-one-more-prompt-bro/
-
@NefariousCelt But I don't NEED that! I've developed that skill for myself, the hard way!
A product where the target demographic is Kevin J Anderson.
-
@ThePowerNap @cstross @ViennaMike
Sure, there are some real helpful use cases for LLMs. I reckon if the LLM industry manages to settle down into supplying only the helpful, sensible use cases for LLMs, it will be about as big - in terms of annual dollar throughput - as e.g. the electric kettle manufacturing sector.
But to get to that sustainable level the LLM industry would have to shrink its market cap by 3 orders of magnitude. And it probably can't do that without collapsing entirely.
@dragonfrog @ThePowerNap @cstross @ViennaMike i have occasionally been using an AI tool called Phind. It was basically a “help with coding” tool. It recently rewrote itself completely into a generic AI tool… and then a month later closed.
I can't help but wonder... is this the start of the great collapse?
-
@cstross Kessler Syndrome for the internet
-
A product where the target demographic is Kevin J Anderson.
@david_chisnall @cstross I think I get this reference. (Confused Captain America face) But I was leaning into how an author needs to be told that they do not understand their own oeuvre. Something AI would excel at.
-
@dragonfrog @cstross @ViennaMike
I more or less agree. I'm just not looking forward to the backlash those of us in the machine learning space will face (I'm in computer vision/medical).
@ThePowerNap @dragonfrog @cstross @ViennaMike The best purpose of an LLM system is to provide a natural language interface to a complex but not mission-critical system such as home automation, media libraries and switchboards.
Machine learning on the other hand should lean into seeing without eyes (is a dramatic jump in intensity cancer?)
Both of these cases should be powered through a single IEC C13 socket and not some GW data centre.
-
@cstross LLMs always remind me of the protagonists of “Accelerando” finally making contact with a whole system of alien AIs, only to learn that they’re all just jumped-up versions of the Nigerian prince scam.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
@cstross Because so many people boosted my post, I want to explore this topic in more detail.
First, a thought experiment: suppose we had cheaply scalable mind control, even if partially effective (LLMs are not mind control but they touch some of the same wires). Democracy would end, since the controllers could sway elections at will. The so-called free market would instantly become a command economy. Science, and any other activity requiring independent thought, would be a dead letter.
-
@cstross Because so many people boosted my post, I want to explore this topic in more detail.
First, a thought experiment: suppose we had cheaply scalable mind control, even if partially effective (LLMs are not mind control but they touch some of the same wires). Democracy would end, since the controllers could sway elections at will. The so-called free market would instantly become a command economy. Science, and any other activity requiring independent thought, would be a dead letter.
@cstross In the book _Rainbows End_, Vinge proposes a form of mind control called "YGBM" technology, which stands for "You Gotta Believe Me", a way to get affected individuals to accept arbitrary propositions as truth.
What this has in common with LLMs is technologically enhanced plausibility.
-
@cstross In the book _Rainbows End_, Vinge proposes a form of mind control called "YGBM" technology, which stands for "You Gotta Believe Me", a way to get affected individuals to accept arbitrary propositions as truth.
What this has in common with LLMs is technologically enhanced plausibility.
@talin I was going to mention "Rainbows End"!
-
@cstross In the book _Rainbows End_, Vinge proposes a form of mind control called "YGBM" technology, which stands for "You Gotta Believe Me", a way to get affected individuals to accept arbitrary propositions as truth.
What this has in common with LLMs is technologically enhanced plausibility.
@cstross Now, we know from the earliest days of machine learning that algorithms are capable of exploiting any loophole in their fitness function with seemingly godlike competence.
Unfortunately, we can't make a fitness function for intelligence, truthfulness, or integrity. What we can do, at great expense, is make a fitness function for plausibility.
When an LLM goes through its reinforcement learning, its behavior is rewarded based on whether some human reviewer believed the result.
-
@cstross Now, we know from the earliest days of machine learning that algorithms are capable of exploiting any loophole in their fitness function with seemingly godlike competence.
Unfortunately, we can't make a fitness function for intelligence, truthfulness, or integrity. What we can do, at great expense, is make a fitness function for plausibility.
When an LLM goes through its reinforcement learning, its behavior is rewarded based on whether some human reviewer believed the result.
@cstross As LLMs improve, they will get better at chasing their fitness function, that is, at convincing people that what they say is true. This is what LLMs have in common with YGBM.
Those who know history know that the invention of cheap printing sparked off centuries of bloody religious conflict. In this regard, LLMs are like "hold my beer".
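The selection pressure described above can be sketched as a toy optimiser. This is not a real RLHF pipeline; the candidate answers and their scores are invented purely to illustrate the point that a fitness function built on reviewer belief rewards plausibility, not truth:

```python
# Hypothetical candidate answers, each scored on two axes we invent
# for this example: how truthful it is, and how plausible it seems
# to a human reviewer who can't verify the facts.
candidates = [
    {"answer": "hedged, correct",       "truthful": 0.9, "plausible": 0.6},
    {"answer": "confident, correct",    "truthful": 0.9, "plausible": 0.8},
    {"answer": "confident, fabricated", "truthful": 0.1, "plausible": 0.95},
]

def reward(candidate):
    # The reviewer can only judge what *seems* right, so the fitness
    # function scores plausibility; truthfulness never enters into it.
    return candidate["plausible"]

# Selection chases the fitness function with perfect competence...
best = max(candidates, key=reward)

# ...and the confident fabrication wins, despite being the least truthful.
print(best["answer"])
```

The optimiser isn't "lying" in any deliberate sense; it is exploiting the only loophole the fitness function left open, which is exactly the reward-hacking behaviour the post describes.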
-
Yet Saudi Arabia alone is laundering $100 billion of its looted national treasury on spam generation?
Perhaps 10% is being used on the spam-making software, the rest is being spent on a fossil fuel funded fascist movement, complete with civil rights erosions, state surveillance platforms, goon squads, concentration camps, and financial fraud.
We continue to underestimate how badly the fossil fuel industry wants permanent rule.
https://www.bloomberg.com/news/articles/2025-09-16/ai-deals-saudi-arabia-eyes-artificial-intelligence-partnership-with-pe-firms
https://www.semafor.com/article/11/07/2025/uae-says-its-invested-148b-in-ai-since-2024
@Npars01 Yep, 💯