LLMs are spam generators.
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross
User:
Well, what’ve you got?
Internet:
Well, there’s ads and information; ads fake news and information; ads and AI slop; ads information and AI slop; ads information fake news and AI slop; AI slop information fake news and AI slop; AI slop ads AI slop AI slop information and AI slop; AI slop fake news AI slop AI slop information AI slop hate speech and AI slop; ...
-
@cstross "spicy autocomplete" has been one of my favorite descriptions of it.
@virtualbri @cstross
I feel like "spicy" has too much of a positive connotation. I struggle to come up with something better than the simple "big autocomplete"
-
@cstross @ViennaMike Rubber ducks are useful when writing code, too. Doesn't make them intelligent.
@rupert @cstross @ViennaMike The LLMs are tools just like any other. They can be helpful, or they can delete your entire production database... it just depends on how you use them
-
@virtualbri @cstross "Mansplaining as a service" is my all-time favorite: https://phpc.social/@andrewfeeney/109466122845775778
-
@cstross People believed Eliza was intelligent.
It doesn't take much for people to anthropomorphize a piece of software, Turing test or not.
-
@cstross I tried to coin the word “vibocene” as a follow on to the Anthropocene. I think I failed.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
-
@Tubemeister @cstross It is even called the Eliza effect and it is so very relevant today.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
we already had that, with the so-called "gish gallop"; this is, however, a way to automate bad-faith horseshit at scale.
this requires new patterns by those of us trying to uphold reality; reactionary argumentation doesn't work when the other side is able to out-scale your ability to respond.
I've found that taking control of the conversation by saying their arguments are based on faulty premises and reaffirming some aspect of reality that undermines their class of arguments is reasonably successful.
-
@cstross Love your fiction, can't agree with this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators, as you have, or as "parrots", as others have, is simply incorrect.
@ViennaMike
They're not "intelligent" at all. They generate a probable series of tokens based on a training corpus. And they are used to generate a lot of spam code that maintainers now have to deal with.
If we weren't in the middle of one of the most hellacious hype cycles in history, LLMs would be neat, but they have become a real scourge and their damage across a multitude of categories far outweighs any utility they have.
They are spam generators. That is all.
-
@virtualbri @cstross
I'll take "techbro autocomplete!" (I have a soft spot for "overengineered" things which can withstand many thousands of times the abuse necessary for the job)
-
@jbowen @virtualbri @cstross I, for one, welcome our stochastic parrot overlords! (I do not)
-
@cstross a lot of people believe LLMs are intelligent because a lot of people are really thick. Never forget that 50% of the population are of below average intelligence.
-
#NightmareOnLLMstreet is my favorite hashtag these days
-
@virtualbri @cstross
Like a tardigrade :)
-