LLMs are spam generators.
-
@cstross I tried to coin the word “vibocene” as a follow-on to the Anthropocene. I think I failed.
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
-
@cstross LLMs are replacing every aspect of civilization - institutions, process knowledge, organizational competence - with civilization fan-fiction.
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
-
@virtualbri @cstross
I feel like "spicy" has too much of a positive connotation. I struggle to come up with something better than the simple "big autocomplete".
-
@cstross People believed Eliza was intelligent.
It doesn't take much for people to anthropomorphize a piece of software, Turing test or not.
@Tubemeister @cstross It is even called the Eliza effect and it is so very relevant today.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
we already had that, with the so-called "gish gallop"; this is, however, a way to automate bad-faith horseshit at scale.
this requires new patterns by those of us trying to uphold reality; reactionary argumentation doesn't work when the other side is able to out-scale your ability to respond.
I've found that taking control of the conversation by saying their arguments are based on faulty premises and reaffirming some aspect of reality that undermines their class of arguments is reasonably successful.
-
@cstross Love your fiction, can't agree on this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators, as you have, or as "parrots", as others have, is simply incorrect.
@ViennaMike
They're not "intelligent" at all. They generate a probable series of tokens based on a training corpus. And they are used to generate a lot of spam code that maintainers now have to deal with.
If we weren't in the middle of one of the most hellacious hype cycles in history, LLMs would be neat, but they have become a real scourge and their damage across a multitude of categories far outweighs any utility they have.
They are spam generators. That is all.
-
@virtualbri @cstross
I'll take "techbro autocomplete!" (I have a soft spot for "overengineered" things which can withstand many thousands of times the abuse necessary for the job)
-
@jbowen @virtualbri @cstross I, for one, welcome our stochastic parrot overlords! (I do not)
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross a lot of people believe LLMs are intelligent because a lot of people are really thick. Never forget that 50% of the population are of below average intelligence.
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
#NightmareOnLLMstreet is my favorite hashtag these days
-
@virtualbri @cstross
Like a tardigrade :)
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross All lines converge toward a single spamishing point.
-
@cstross Put another way: LLMs have revealed a zero-day exploit in human consciousness and culture: if you can manufacture plausibility at scale, you can bypass all of the accumulated wisdom of centuries of skepticism and critical thinking. Any fact-using profession is potentially vulnerable to this attack.
Another thing that Douglas Adams invented but we don't actually want (along with GPP: Genuine People Personalities™)
-
@ViennaMike You're using code generators. Not the same thing, frankly. Stop generalizing your experience as a developer to the public at large, who only see magic talking box.
If you like, put another way, you're taking the "good guy with a gun" side of the argument.

I mean, feel free, but own it. Intelligent well-informed adults frequently take opposite sides in this.

-
@cstross All lines converge toward a single spamishing point.
@cstross One of the execs at $CURRENT_OVERLORD wrote a blog post about passing the Turing Test. He raved about the AI that could fool him. But that famous feat long ago ceased to measure the capabilities of a computer program, and instead started to measure the limitations of human perception.
When the current gen of chat bots can fool you, it isn't because they're matching levels of human capabilities now. They're exploiting levels of human weaknesses.
-
For decades the #Turing test was the ironclad determinant of #AI quality.
Once it was trivially broken by now-old models, the goalposts have shifted… it's now Humanity's Last Exam (HLE), BTW.
The world is full of spam-generating humans.
One is in the Whitest House, Putin is surrounded by them, and any large family gathering will contain 2-3 human spam generators that will jibber-jabber nonsensical human-like speech constantly. I'm told Turing also thought 30% convincing was enough to be noteworthy, presumably because some humans barely pass that.
