@glyph Did you quote post something?
-
@glyph I never want to cast shade on addicts for being addicted; addictions are fucking awful.
I absolutely will cast shade on addicts *or anyone else* for insisting I should be addicted too, and for transforming all of society around the idea that my being addicted is a good thing.
-
@glyph This feels about right, and with this, the next trick is building a culture of healing and support for when those who have fallen prey to this addiction are ready for change.
@mttaggart unfortunately I have no concept of what "rock bottom" for an LLM user looks like. Step 0 is going to have to be that society stops massively rewarding them with prestige and huge piles of cash first
-
@glyph I just keep coming back to the fact that LLMs put our brains directly on the inner loop of an optimization algorithm — that's already true to some degree with advertising and social media engagement algorithms, but LLMs tighten that loop even more, and we don't know what that does to brains!
@xgranade looks like we're gonna find out
-
@xgranade looks like we're gonna find out
@glyph Maybe! Or maybe it'll be like leaded gasoline where we never really are able to trace back which awful things were due to that source of lead and which things weren't because it all gets mixed up in the "heavy metal in head, things bad now" bucket.
-
@xgranade to this point I have had kratom users — mostly indirectly, I am not particularly close with any — suggest that I try it because it is "more effective" and "easier to get" than prescription ADHD meds. And I'd definitely be lying if I said I didn't feel a *strong* pull towards believing that. It would be very nice to solve all my problems with a pill or a prompt.
-
@mttaggart unfortunately I have no concept of what "rock bottom" for an LLM user looks like. Step 0 is going to have to be that society stops massively rewarding them with prestige and huge piles of cash first
@glyph Absolutely, but even in that case we can make out the general shape of the thing. We know the current model is not economically sustainable for any party. For users, the result will be either losing access to the models entirely or paying cripplingly high prices to keep it. Option 1 will expose their inability to function without the models, and option 2 will impose the kinds of costs that look like rock bottom in other addictions.
-
@glyph Absolutely, but even in that case we can make out the general shape of the thing. We know the current model is not economically sustainable for any party. For users, the result will be either losing access to the models entirely or paying cripplingly high prices to keep it. Option 1 will expose their inability to function without the models, and option 2 will impose the kinds of costs that look like rock bottom in other addictions.
@mttaggart my suspicion is that when this happens we are going to find out that there is a huge predisposition component. There will be people who say "ah well. guess I'm a little rusty writing unit tests now, but time to get back at it" and there will be people who will go to sleep crying tears of frustration for the rest of their lives as they struggle to reclaim the feeling of being predictably productive again.
-
@glyph Maybe! Or maybe it'll be like leaded gasoline where we never really are able to trace back which awful things were due to that source of lead and which things weren't because it all gets mixed up in the "heavy metal in head, things bad now" bucket.
@xgranade definitely going to make a killing when I put all this low-linear-algebra steel back on the market
-
@mttaggart my suspicion is that when this happens we are going to find out that there is a huge predisposition component. There will be people who say "ah well. guess I'm a little rusty writing unit tests now, but time to get back at it" and there will be people who will go to sleep crying tears of frustration for the rest of their lives as they struggle to reclaim the feeling of being predictably productive again.
@glyph Sure, although in that case the first group may be said to be users, but not addicts. My suspicion is that time of use will be a factor there.
-
@datarama @xgranade I have other anxiety issues (OCD does not really resonate broadly, but anxiety+ADHD can produce a similar effect) and I sometimes feel waves of INTENSE rage when I open up my Apple TV and it's playing a loud trailer for a show I don't really want to get into, or launch Steam and find that it's forgotten the preference to open to the Library and not the Store tab. We need strict regulation for ALL of these attractive "algorithmic" nuisances.
-
@clew @xgranade I mean with the GPT-5 / r/MyBoyfriendIsAI debacle (which is, I suppose, still ongoing) we saw that sometimes it literally is just _exactly_ love bombing. But there is a pretty significant difference in degree (if not, necessarily, in kind?) between telling the robot to write your CSS for you and telling it to validate your feelings.