LLMs are spam generators.
-
@cstross Love your fiction, can't agree on this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators as you have, or "parrots" as others have, is simply incorrect.
@ViennaMike You're using code generators. Not the same thing, frankly. Stop generalizing your experience as a developer to the public at large, who only see magic talking box.
-
@cstross "spicy autocomplete" has been one of my favorite descriptions of it.
@virtualbri @cstross "Mansplaining as a service" is my all-time favorite: https://phpc.social/@andrewfeeney/109466122845775778
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross Slop in the Faith..."AI" is the perpetual motion machine of the 21st century, but with nasty consequences...
-
@cstross @ViennaMike Rubber ducks are useful when writing code, too. Doesn't make them intelligent.
-
There is some real utility under the hood, but goddamn.
The pushback against higher utility bills and GenAI diarrhea is justified, and a huge black eye for practical use.
-
@ViennaMike if the purpose of a system is what it does… they are spam generators. They generate meaningless nonsense at vast scale to destroy any ability for information correlation or retrieval at large, for profit.
Anything else is a meaningless sub-percent of usage.
-
For decades the #Turing test was the ironclad determinant of #AI quality.
Once it was trivially passed by now-old models, the goalposts shifted... it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam-generating humans.
One is in the Whitest house, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly.
-
@cstross
User:
Well, what’ve you got?
Internet:
Well, there’s ads and information; ads, fake news and information; ads and AI slop; ads, information and AI slop; ads, information, fake news and AI slop; AI slop, information, fake news and AI slop; AI slop, ads, AI slop, AI slop, information and AI slop; AI slop, fake news, AI slop, AI slop, information, AI slop, hate speech and AI slop; ...
-
@virtualbri @cstross
I feel like "spicy" has too much of a positive connotation. I struggle to come up with something better than the simple "big autocomplete".
-
@cstross @ViennaMike Rubber ducks are useful when writing code, too. Doesn't make them intelligent.
@rupert @cstross @ViennaMike The LLMs are tools just like any other. They can be helpful, or they can delete your entire production database... it just depends on how you use them.
-
@cstross People believed Eliza was intelligent.
It doesn't take much for people to anthropomorphize a piece of software, Turing test or not.
-
@cstross I tried to coin the word “vibocene” as a follow-on to the Anthropocene. I think I failed.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
-
@cstross People believed Eliza was intelligent.
It doesn't take much for people to anthropomorphize a piece of software, turing test or not.
@Tubemeister @cstross It is even called the Eliza effect and it is so very relevant today.
-
We already had that with the so-called "Gish gallop"; this is, however, a way to automate bad-faith horseshit at scale.
This requires new patterns from those of us trying to uphold reality; reactionary argumentation doesn't work when the other side can out-scale your ability to respond.
I've found that taking control of the conversation, by stating that their arguments rest on faulty premises and reaffirming some aspect of reality that undermines their whole class of arguments, is reasonably successful.
-
@ViennaMike
They're not "intelligent" at all. They generate a probable series of tokens based on a training corpus. And they are used to generate a lot of spam code that maintainers now have to deal with.
If we weren't in the middle of one of the most hellacious hype cycles in history, LLMs would be neat, but they have become a real scourge and their damage across a multitude of categories far outweighs any utility they have.
They are spam generators. That is all.
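(The "probable series of tokens" mechanism described above can be sketched with a toy word-bigram model. This is a hypothetical miniature stand-in, nothing like a real LLM's neural network, but the sample-the-next-token loop is analogous.)

```python
import random
from collections import Counter, defaultdict

# Tiny illustrative "training corpus" (made up for this sketch).
corpus = "the spam is spam and the spam is plausible".split()

# Count, for each token, which tokens follow it in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length, seed=0):
    """Emit a probable series of tokens, one next-token draw at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = following.get(out[-1])
        if not counts:  # no observed continuation: stop
            break
        tokens, weights = zip(*counts.items())
        # Sample the next token in proportion to how often it followed
        # the previous one in the corpus -- plausibility, not meaning.
        out.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(out)

print(generate("the", 5))
```

Every emitted token is statistically plausible given the one before it, yet nothing in the loop models truth or intent, which is the crux of the "spam generator" argument.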
-
@virtualbri @cstross
I'll take "techbro autocomplete"!
(I have a soft spot for "overengineered" things which can withstand many thousands of times the abuse necessary for the job)
-