@glyph Did you quote post something?
-
@glyph I just keep coming back to the fact that LLMs put our brains directly in the inner loop of an optimization algorithm — that's already true to some degree with advertising and social media engagement algorithms, but LLMs tighten that loop even more, and we don't know what that does to brains!
-
@datarama @xgranade I have other anxiety issues (OCD doesn't quite fit my case, but anxiety+ADHD can produce a similar effect) and I sometimes feel waves of INTENSE rage when I open up my Apple TV and it's playing a loud trailer for a show I don't really want to get into, or launch Steam and find that it's forgotten the preference to open to the Library tab and not the Store tab. We need strict regulation for ALL of these attention-grabbing "algorithmic" nuisances
-
@clew @xgranade I mean with the GPT-5 / r/MyBoyfriendIsAI debacle (which is, I suppose, still ongoing) we saw that sometimes it literally is just _exactly_ love bombing. But there is a pretty significant difference in degree (if not, necessarily, in kind?) between telling the robot to write your CSS for you and telling it to validate your feelings
-
@xgranade looks like we're gonna find out
-
@fnohe @glyph With advertising, a company is presumably trying to make numbers go up by intervening in a process (in the causal inference sense) that includes your brain. They're generally constrained by the limited menu of hypothetical interventions, though, each of which has to be written or made by a person.
For social media, that bottleneck can be shortcut dramatically by selecting from existing posts to show you.
-
@fnohe @glyph
For an LLM chatbot, they don't even need that. New interventions can be emitted programmatically, and your "engagement" with the chatbot measured as a result of those interventions. We don't really know what that does to brains, or what effectively letting an LLM fuzz our personalities looks like as a mass psychological experiment.
-
@semanticist @glyph it got restricted here along with the rest of the "New Psychoactive Substances" (which do have short-term beneficial effects, but tolerance and compulsive usage patterns arrive very quickly).
-
@glyph The addictive behavior isn't new, I've flagged it before. There is also a reason the meetup is called Claude Code Anonymous. What puzzles me is how dismissive people still are of LLMs, despite the mounting evidence to the contrary. I thought at this point we would be past that.
-
@glyph This feels about right, and with this, the next trick is building a culture of healing and support for when those who have fallen prey to this addiction are ready for change.
@mttaggart @glyph
not exactly on the nose, but close to it on "culture of healing" I'd say; it speaks of "recovering prompt writers". Don't know if you read this marvelous, lengthy piece: https://sightlessscribbles.com/the-colonization-of-confidence/ ?
-
@mitsuhiko [citation needed]
-
@mitsuhiko my assertion is that there's no evidence they're useful. There's LOADS of evidence that people SUBJECTIVELY FEEL that they are useful, but that is not the same thing. I subjectively feel that they are destructive and waste time. If you want to proceed past this disagreement you are going to need to bring methodologically credible evidence.
-
@glyph I mean, I don't know what to tell you. Are you still really doubting that these things are useful? I've written so many times about this now; are you dismissing it? I can point you to code that I've written over the last seven days that, in terms of complexity and utility, is way beyond what we've been able to push out over Christmas. (eg: https://github.com/mitsuhiko/tankgame which is public)
Like, how can you doubt this? It just boggles my mind.
-
@glyph Before, I would often spend an hour or more fighting the nightmare that is the Android build system before finding an incantation like implementation('com.google.guava:guava:32.1.3-jre') { exclude group: 'com.google.j2objc', module: 'j2objc-annotations' } on Stack Overflow, with an explanation that this is the only way to include libraries foo and bar in the same project. Now an LLM can immediately fix this for me, and I no longer dream about inflicting violence on whoever invented Gradle.
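For anyone hitting the same clash, the incantation quoted above would translate to Gradle's Kotlin DSL roughly like this. This is only a sketch using the coordinates from the post; whether you actually need the exclusion depends on which other dependency in your project pulls in a conflicting copy of j2objc-annotations:

```kotlin
// build.gradle.kts (sketch, coordinates as quoted in the post above)
dependencies {
    implementation("com.google.guava:guava:32.1.3-jre") {
        // Drop Guava's transitive annotations artifact so it can't
        // conflict with a copy brought in by another library.
        exclude(group = "com.google.j2objc", module = "j2objc-annotations")
    }
}
```

You can confirm the exclusion took effect with `./gradlew dependencies --configuration runtimeClasspath` and checking that j2objc-annotations no longer appears under Guava.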
-
@mitsuhiko amazing. you plagiarized a game-jam game in about the amount of time that a game jam usually takes to run. truly our society will be revolutionized
-
@glyph Please tell me what is plagiarized. And also: no, I wouldn't have been able to do it without an LLM over Christmas while also working on actual work. I just couldn't have done it. You might be able to. I can't. And that's a pretty big difference.