LLMs are spam generators.
-
LLMs are spam generators. That is all.
They're designed to generate plausibly human-like text well enough to pass a generic Turing Test. That's why people believe they're "intelligent".
But really, all they are is spam generators.
We have hit the spamularity.
@cstross I understand what you mean, but I’d say they’re simply databases. Spam generator is one of their many uses.
-
@davidgerard @cstross The article you posted included a very useful study. However, contrary to the article, that study is NOT the only study on the topic. Other, far larger and more comprehensive studies show measured, meaningful productivity improvements. You asked for measurements, so here they are: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4945566 and https://www.youtube.com/watch?v=tbDDYKRFjhk The Stanford study found that the average productivity boost is significant (~20%), though some teams see a productivity decrease.
@ViennaMike @cstross Your paper's key metric is pull requests, which is addressed in the link you replied to but didn't read (one Jira leaves, three Jiras enter). But thanks anyway.
(Over and over I see this from AI guys: rebuttals that ignore the question asked.)
-
@cstross Kessler Syndrome for the internet
-
@ThePowerNap @dragonfrog @cstross @ViennaMike I suspect that it'll be good for the ML space -- the conflation of LLM prompting with the harder skills required to actually train a model has been disastrous for the market. The industry isn't sophisticated enough for people to distinguish between the two skillsets, so the latter market has just been absolutely deleted in Australia.
@ludicity @dragonfrog @cstross @ViennaMike
I'm kinda betting on this, but it doesn't mean the next couple of years won't be painful.
-
@ThePowerNap @dragonfrog @cstross @ViennaMike The best use of an LLM system is to provide a natural language interface to a complex but not mission-critical system, such as home automation, media libraries and switchboards.
Machine learning, on the other hand, should lean into seeing without eyes (is a dramatic jump in intensity cancer?).
Both of these cases should be powered through a single IEC C13 socket, not some GW data centre.
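A rough sketch of that pattern, with a made-up command set and hypothetical names throughout (just to illustrate the idea, not anyone's actual setup): the LLM's only job is to map an utterance onto a tiny fixed vocabulary, and a dumb allow-list gate fails closed on everything else.

    # Hypothetical allow-list gate for a non-mission-critical system.
    # The LLM's job ends at proposing "device:action"; this gate decides.
    ALLOWED = {
        ("lights", "on"), ("lights", "off"),
        ("media", "play"), ("media", "pause"),
    }

    def dispatch(llm_output: str) -> str:
        try:
            device, action = llm_output.strip().lower().split(":")
        except ValueError:
            return "rejected: malformed command"
        if (device, action) not in ALLOWED:
            return f"rejected: {device}:{action} is not in the command set"
        return f"ok: {device} -> {action}"  # hand off to the real controller here

    print(dispatch("lights:on"))       # ok: lights -> on
    print(dispatch("boiler:disable"))  # rejected -- fail closed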
@NefariousCelt @dragonfrog @cstross @ViennaMike
95% agree. I'm currently working on an LLM project that is interfacing with something a touch more than home automation. Aside from fine-tuning the model and putting constraints on the output (think https://github.com/dottxt-ai/outlines), you can use existing approved software as validation of LLM output.
All of this requires in-depth knowledge, patience and exhaustive testing.
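For what it's worth, a minimal sketch of the constrained-output part, using the pre-1.0 outlines API (the model name and command schema here are placeholders, not the actual project's):

    # Constrained decoding with outlines: the sampler can only emit tokens
    # that keep the output valid against the schema, so it always parses.
    from enum import Enum
    from pydantic import BaseModel
    import outlines

    class Device(str, Enum):
        lights = "lights"
        thermostat = "thermostat"

    class Command(BaseModel):
        device: Device
        action: str
        value: int

    model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")
    generator = outlines.generate.json(model, Command)

    cmd = generator("Set the thermostat to 21 degrees. Reply as a command.")
    # Guaranteed to parse as a Command, but it still goes through the
    # existing approved software for validation before anything actuates.
    print(cmd.device, cmd.action, cmd.value)

Guaranteed-parseable is not the same as guaranteed-sensible, which is why the approved-software validation step carries the real safety load.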
@ThePowerNap @dragonfrog @cstross @ViennaMike I suspect I am coming from a very risk-averse standpoint. The "summarise this" use case is what scares me about LLM marketing. I have been burned by poor software requirement specifications, and language simplification can do that in multiple sectors. But yeah, mapping natural language to a fixed set of goals, that's fine. Trust AI with my life? No. Sudden random steering inputs, WTAF Stellantis.
-
@talin @cstross I don't think there was ever a time in human history when critical thinkers were more than a small minority of the population. LLMs are unprecedented in the scale at which they can produce misinformation. Who knows? Exposed to so much bullshit, people may become more critical of what they see. It's already happening with images and videos, right?
-
@dragonfrog @ThePowerNap @cstross @ViennaMike I have occasionally been using an AI tool called Phind. It was basically a "help with coding" tool. It recently rewrote itself completely into a generic AI tool… and then a month later it shut down.
I can't help but wonder… is this the start of the great collapse?
@leadegroot @dragonfrog @ThePowerNap
I can certainly see a bubble bursting, but I'm not sure Phind is a sign. It's hard to compete with those who have billions to develop core models, and you're not going to hit any profit home runs reselling access to others' models.
-
@cstross
The Turing Test will not fail an LLM for the same reason humans are ready to accept them as persons: humans evolved with a hardcoded behavior to perceive any language-using process as a cognitive entity and create a mental model of it. Rejecting that behavior takes a metacognitive ability that has not been taught to most people, so the problem isn't rampant stupidity (look at how many clearly intelligent AI researchers fail this test) but rather ignorance, sometimes wilful, granted.
-
@cstross we have to find ways to do something useful with spam generators, otherwise we might lose the social permission to generate more spam!
-
@cstross 3 years ago I came up with https://github.com/tanepiper/Stochastic-Parrot as a way to test the then-new OpenAI APIs. After a month it became very clear to me that all someone needs is $100 and a GitHub pipeline to quite easily create a spam farm. No need to hire pesky humans for $1 a day; it was probably the first job taken by AI.
-
@cstross
Agreed on the hardcoded behavior to perceive any language-using process as a cognitive entity. And it doesn't help that a huge portion of postmodern philosophy is built on the ideas of the linguistic turn.
I never felt comfortable with this line of thinking, and if nothing else, I appreciate that the developments of and around LLMs are finally putting some hard and lasting nails into the coffin of the idea that linguistic processing corresponds with cognition.
-
@cstross Love your fiction, can't agree on this take. Spam does not help with writing code, whereas LLMs can be extremely helpful.
LLMs are not generally intelligent. They can be immensely problematic, but to dismiss them as merely spam generators as you have, or as "parrots" as others have, is simply incorrect.
@ViennaMike @cstross Just going empirically, if you have to go back in and do massive clean-up of the output that is supposed to represent the end-state of a product, it’s probably spam or at the very least spam-like.
-
@cstross I believe we had "spam" long before LLMs, and it was generated just fine. Blame computers!
-
@mrjunge If you have to spend almost as much time, or more, cleaning up an LLM-assisted product as you originally saved, I agree. LLMs can certainly generate spam or spam-like output at times. But to claim that that's ALL they do is not correct. One company found that initially their code-writing gains were eaten up by quality-control rework, but that after learning how to better use the tools, when to use them, and when not to, production went up 30% with no loss of quality.