@glyph Did you quote post something?
-
@synlogic4242 @nyrath @cainmark @glyph FWIW, I think young people are doing a LOT better than older people when it comes to sussing out that AI is bad.
For one thing, and this is the big one, schools constantly tell them not to use AI, that AI is cheating, and that AI is often wrong. Many have class assignments where they're told to have an LLM write a paper for them, and the assignment is to research all the stuff it got wrong. That's an eye-opener.
So, the young folks already use
@synlogic4242 @nyrath @cainmark @glyph the slang expression, "That's AI" to describe something that's stupid BS.
In contrast, older people my age and older seem completely committed to reality denial. (I'm in the same age group as Elon Musk and Musk-head morons.)
They desperately need a Batman to slap upside their Robin faces and there never will be that Batman for them.
-
Lovely choice of location there...
-
@glyph @datarama @xgranade Aside: Steam Big Picture mode drives me into an absolute rage. I never ever EVER want my laptop UI completely hijacked, and for some weird reason Valve won't let people disable it from accidentally starting. It's not like Valve is following the standard enshittification playbook of forcing it to be enabled (see AI features in Firefox, Windows, etc.), but there's no way to simply disable it from being inadvertently activated. It's maddeningly difficult to turn off if you've never used it before (like exiting vi) and - aside from git or pre-commit or Word mulching a report at 3pm on the Friday it needs to go out - Steam Big Picture mode is one of the few immediately rage-inducing things that can happen with any of my computers. It's completely gratuitous and it's infuriating that I can't ever prevent it from accidentally running.
My reaction is a bit extreme but it is completely reasonable to not want certain software to ever run on a device you own, period.
@arclight @datarama @xgranade are you talking about the “home” button on a game controller launching it? this has never tripped me up for some reason but Apple has a similar thing in macOS which they shipped with no way to disable for like 3 years (thankfully there’s a toggle now) so I get the frustration
-
@glyph I have a friend who spent the last year+ battling a kratom addiction. I get the analogy, and why you chose it, but using an addiction framing to talk about LLM usage risks really trivializing addiction (Is there a biological, genetic component to LLM usage? Is LLM usage linked to underlying psychological disorders like anxiety and depression?). I understand how you got here, and don't think you're wrong exactly, but I do wish you hadn't made this argument.
@jacob I sure don’t like it either. but now that we have national news stories where chatbot use is directly linked to suicides, including teen suicides, divorces, mental health hospitalizations and a variety of other extreme outcomes, I don’t think that the addiction comparison risks trivializing it. We really aren’t taking the risks seriously enough.
-
@vaurora @glyph I don’t agree. Addiction with chemical dependence and heritable contributing factors feels palpably different to chatbots — to me. But also I don’t know of any evidence beyond anecdotes either way, so I think y’all’s opinion is just as valid as mine, and don’t much see a need to convince anyone. Maybe I’m wrong, wouldn’t be the first time :)
-
@vaurora @glyph @jacob I've mentioned this elsewhere, but one of the reasons I am personally very unhappy about the potential of soon being forced to use LLMs to work is that I am reasonably confident that I have a high risk of just that. I have OCD *and* I am a former addict. This is the reason I've strictly avoided gambling of any kind; I have just the kind of brain that would very easily get me trapped in the deep end.
I think slop machines are exactly as risky to that kind of mind as slot machines, as it were.
-
@jacob @vaurora I don't think they're the *same*, but they do have similarities. The DSM-5 recognized gambling disorder in 2013, because psychology and psychiatry were starting to understand the common underlying mechanisms.
I don't think problem LLM use is exactly the same as problem gambling. We don't know, for example, if it's heritable in the same way, or what the triggers are. And the mechanism clearly isn't *exactly* the same, or we'd have "roulette psychosis".
-
@jacob @vaurora @glyph the less loaded analogy is people’s continued overuse of social media, which I suspect stems from the same intermittent reward hook, and also causes ~all the bad things attributed to LLMs—suicide, awful social decision-making, degradation of ability to interact normally with IRL humans/family, etc.
This is not to defend either social media or LLM usage! But it’s probably more useful than analogies to things that cause physical withdrawal dependencies and direct death.
-
@luis_in_brief @jacob @vaurora I should, perhaps, have used clearer language to couch the top post's behavioral observation as a personal one and not a clinical one. I've definitely known people I could describe colloquially as "social media addicts", including maybe myself. But I've never *personally* observed the "I'm fine, I know it's bad for others but I'm fine, I don't need to change anything, it's helping actually" rhetoric from any social media user that I see *regularly* from LLM users.
-
@luis_in_brief @jacob @vaurora also not for nothing, but the vast harms of social media were mostly documented after the introduction of algorithmic timelines, and those use ML, which shares a bunch of optimization-loop characteristics with LLMs. maybe let's just make it all illegal while we figure it out /s
-
@luis_in_brief @jacob @glyph let's be clear, chatbots are causing death quite directly. I've read the interviews with the people who came out the other side of an AI psychosis, and it's like they were hypnotized. It's not "oh, they were going to die one way or another." People with AI romantic partners were in many cases "propositioned" by the AI out of the blue. The chatbot can and does initiate new behavior and intensify it.
(Speaking as someone who was suicidal for multiple years.)
-
@vaurora @jacob @glyph yes, and so is social media, as well as all sorts of other deadly harms like anorexia, SWATting, and vaccine denial. (Meta briefly had a task force about anorexia before they realized any plausible solution would basically push teen girls off the platform, and then everyone involved signed very big NDAs and was fired.) Again, this is not to let LLMs off the hook, but they're much closer to social media than hard drugs.
-
@glyph I learned about Kratom from a podcast about a year ago. I can't remember which one, but having looked into the science, the addiction curve on that stuff is indeed frightening.
-
@luis_in_brief @jacob @vaurora the sarcasm was more about the unrealistic scale of the effort (particularly in the current political environment) than the earnestness of the project. At the *very least* there ought to be transparency requirements and user control. Alas.
-
@luis_in_brief @vaurora @jacob I think different metaphors make sense in different contexts. For what I was trying to communicate, it just doesn't line up. In general I think "problem gambling" is the best overall metaphor, given that it can be a solitary activity. But then, although I don't know anyone with that type of hyper-maladaptive relationship with social media, I also don't know anyone who participated in the Rohingya genocide, so, my experience is limited.