@glyph Did you quote post something?
-
@glyph Maybe! Or maybe it'll be like leaded gasoline where we never really are able to trace back which awful things were due to that source of lead and which things weren't because it all gets mixed up in the "heavy metal in head, things bad now" bucket.
@xgranade definitely going to make a killing when I put all this low-linear-algebra steel back on the market
-
@glyph I just keep coming back to the fact that LLMs put our brains directly on the inner loop of an optimization algorithm — that's already true to some degree with advertising and social-media engagement algorithms, but LLMs tighten that loop even more, and we don't know what that does to brains!
-
@mttaggart My suspicion is that when this happens we are going to find out that there is a huge predisposition component. There will be people who say "ah well, guess I'm a little rusty at writing unit tests now, but time to get back at it", and there will be people who go to sleep crying tears of frustration for the rest of their lives as they struggle to reclaim the feeling of being predictably productive again.
@glyph Sure, although that first group may be said to be users, but not addicts, if that's the case. My suspicion is that time of use will be a factor there.
-
@datarama @xgranade I have other anxiety issues (OCD does not really resonate broadly, but anxiety+ADHD can produce a similar effect) and I sometimes feel waves of INTENSE rage when I open up my Apple TV and it's playing a loud trailer for a show I don't really want to get into, or launch Steam and find that it's forgotten the preference to open to the Library and not Store tab. We need strict regulation for ALL of these attractive "algorithmic" nuisances
-
@clew @xgranade I mean, with the GPT-5 / r/MyBoyfriendIsAI debacle (which is, I suppose, still ongoing) we saw that sometimes it literally is just _exactly_ love bombing. But there is a pretty significant difference in degree (if not, necessarily, in kind?) between telling the robot to write your CSS for you and telling it to validate your feelings.
-
@xgranade looks like we're gonna find out
-
@fnohe @glyph With advertising, a company is presumably trying to make numbers go up by intervening in a process (in the causal-inference sense) that includes your brain. They're generally constrained by the limited choice of hypothetical interventions, though, each of which has to be written or made by a person.
For social media, that can be shortcut dramatically by selecting existing posts to show you.
-
@fnohe @glyph
For an LLM chatbot, they don't even need that. New interventions can be emitted programmatically, and your "engagement" with the chatbot measured as a result of those interventions. We don't really know what that does to brains: what effectively letting an LLM fuzz our personalities looks like as a mass psychological experiment.
-
@semanticist @glyph It got restricted here along with the rest of the "New Psychoactive Substances" (which do have short-term beneficial effects, but tolerance and compulsive usage patterns arrive very quickly).
-
@glyph The addictive behavior isn't new, I've flagged it before. There is also a reason the meetup is called Claude Code Anonymous. What puzzles me is how dismissive people still are of LLMs, despite the mounting evidence to the contrary. I thought at this point we would be past that.
-
@glyph This feels about right, and with this, the next trick is building a culture of healing and support for when those who have fallen prey to this addiction are ready for change.
@mttaggart @glyph
Not exactly on the spot but close to it on "culture of healing", I'd say; it speaks of "recovering prompt writers". Don't know if you read this marvelous, lengthy piece: https://sightlessscribbles.com/the-colonization-of-confidence/ ?
-
@mitsuhiko [citation needed]
-
@mitsuhiko My assertion is that there's no evidence they're useful. There's LOADS of evidence that people SUBJECTIVELY FEEL that they are useful, but that is not the same thing. I subjectively feel that they are destructive and waste time. If you want to proceed past this disagreement, you are going to need to bring methodologically credible evidence.
-
@glyph I mean, I don't know what to tell you. Are you really still doubting that these things are useful? I've written so many times about this now; are you dismissing it? I can point you to code that I've written over the last seven days that, in terms of complexity and utility, is way beyond what we've been able to push out over Christmas (e.g. https://github.com/mitsuhiko/tankgame, which is public).
Like, how can you doubt this? It just boggles my mind.
-
@glyph Before, I would often spend an hour or more fighting the nightmare that is the Android build system before finding an incantation like implementation('com.google.guava:guava:32.1.3-jre') { exclude group: 'com.google.j2objc', module: 'j2objc-annotations' } on Stack Overflow, with an explanation that this is the only way to include libraries foo and bar in the same project. Now an LLM can immediately fix this for me, and I no longer dream about inflicting violence on whoever invented Gradle.
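For anyone hitting the same clash: that incantation would sit in the dependencies block of a module-level build.gradle roughly like this (a sketch — the Guava coordinates and the j2objc exclusion come from the post above; the surrounding block structure is just standard Gradle Groovy DSL, and other module names are placeholders):

```groovy
dependencies {
    // Pull in Guava, but exclude the j2objc annotations it declares
    // transitively, since another library in the project may ship
    // the same annotation classes and cause a duplicate-class error.
    implementation('com.google.guava:guava:32.1.3-jre') {
        exclude group: 'com.google.j2objc', module: 'j2objc-annotations'
    }
}
```

Running ./gradlew :app:dependencies afterwards is the usual way to confirm the excluded module no longer appears in the resolved dependency tree.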
-