@glyph Did you quote post something?
-
@nyrath @isaackuo @cainmark @glyph in the case of ELIZA, its script/algorithm for directing how it converses with the user was simple and predictable enough that after a day or so, maybe even a few hours or less, an intelligent user could experience the AHA moment where they realize how they've been manipulated. the abstraction leaks through. cracks appear. and it shatters their willing suspension of disbelief.
with LLMs I'd argue that since their script/algorithm and database is so much bigger, they can keep the user in the honeymoon period longer. one might go months before the illusion is broken, if it ever is. for younger or less sophisticated users that honeymoon period will likely last longer, possibly forever
corollary: the youngest people might be in the most danger of getting warped by heavy/longtime LLM use. and so might the already mentally ill, or IQ-challenged. now add in a culture with easy access to guns, and hostile nation states running influence ops online, at scale. BAD
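to give a sense of how thin ELIZA's script really was: here's a toy sketch in the spirit of, but far simpler than, Weizenbaum's actual DOCTOR script. keyword rules, canned templates, a little pronoun reflection, and that's the whole trick (all rules and wording here are illustrative, not the real script):

```python
import random
import re

# toy ELIZA-style rules -- illustrative only, NOT Weizenbaum's actual DOCTOR script.
# each rule pairs a regex with canned response templates.
RULES = [
    (r"i need (.*)", ["why do you need {0}?", "would getting {0} really help you?"]),
    (r"i am (.*)", ["how long have you been {0}?", "why do you think you are {0}?"]),
    (r"my (.*)", ["tell me more about your {0}.", "why does your {0} concern you?"]),
    (r".*", ["please go on.", "how does that make you feel?"]),  # catch-all
]

# crude pronoun reflection so echoed fragments read back naturally
REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "i", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECT.get(word, word) for word in fragment.split())

def respond(utterance):
    # first matching rule wins; captured text is reflected and spliced into a template
    text = utterance.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)

print(respond("I need a break from work."))  # e.g. "why do you need a break from work?"
```

once you notice the rules firing, the spell breaks. LLMs bury the same basic move under billions of parameters, so the seams take far longer to find.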
-
@FediThing @phl @glyph LLMs also don’t do literal recall. They work not too dissimilarly to how you work when you implement something from what you learned before.
-
@datarama @xgranade I have other anxiety issues (OCD does not really resonate broadly, but anxiety+ADHD can produce a similar effect) and I sometimes feel waves of INTENSE rage when I open up my Apple TV and it's playing a loud trailer for a show I don't really want to get into, or launch Steam and find that it's forgotten the preference to open to the Library tab and not the Store tab. We need strict regulation for ALL of these attractive "algorithmic" nuisances.
@glyph @xgranade I've unfortunately struck a mental health jackpot: I'm diagnosed with OCD, generalized anxiety disorder and major depressive disorder. Also, I'm on the autism spectrum. I'm genuinely grateful that I'm actually able to work (but presently terrified, because I'm well aware that there's only a relatively small niche - and one that a trillion-dollar industry is currently betting on destroying - in which I can work productively).
-
@cainmark @nyrath @glyph I mean - if tarot card reading or crystal ball reading was a 1-on-1, person-to-person experience, the astrology column was the mechanized, mass-produced version.
Or those gumball machines with astrology scrolls. Or fortune cookies. It should have been obvious they were just mechanistic devices, just like ELIZA.
And yet ...
-
@synlogic4242 @nyrath @cainmark @glyph FWIW, I think young people are doing a LOT better than older people when it comes to sussing out that AI is bad.
For one thing, and this is the big one, schools constantly tell them not to use AI, that AI is cheating, and that AI is often wrong. Many have class assignments where they're told to have an LLM write a paper for them, and their assignment is to research all the stuff it got wrong. That's an eye-opener.
So, the young folks already use the slang expression "That's AI" to describe something that's stupid BS.
In contrast, people my age and older seem completely committed to reality denial. (I'm in the same age group as Elon Musk and the Musk-head morons.)
They desperately need a Batman to slap them upside their Robin faces, and there never will be that Batman for them.
-
Lovely choice of location there...
-
@glyph @datarama @xgranade Aside: Steam Big Picture mode drives me into an absolute rage. I never ever EVER want my laptop UI completely hijacked, and for some weird reason Valve won't let people stop it from accidentally starting. It's not like Valve is following the standard enshittification playbook of forcing it to be enabled (see AI features in Firefox, Windows, etc.), but there's no way to simply disable it from being inadvertently activated. It's maddeningly difficult to turn off if you've never used it before (like exiting vi) and - aside from git or pre-commit or Word mulching a report at 3pm on the Friday it needs to go out - Steam Big Picture mode is one of the few immediately rage-inducing things that can happen on any of my computers. It's completely gratuitous, and it's infuriating that I can't ever prevent it from accidentally running.
My reaction is a bit extreme but it is completely reasonable to not want certain software to ever run on a device you own, period.
@arclight @datarama @xgranade are you talking about the “home” button on a game controller launching it? this has never tripped me up for some reason but Apple has a similar thing in macOS which they shipped with no way to disable for like 3 years (thankfully there’s a toggle now) so I get the frustration
-
@glyph I have a friend who spent the last year+ battling a kratom addiction. I get the analogy, and why you chose it, but using an addiction framing to talk about LLM usage risks really trivializing addiction (Is there a biological, genetic component to LLM usage? Is LLM usage linked to underlying psychological disorders like anxiety and depression?). I understand how you got here, and don’t think you’re wrong exactly, but I do wish you hadn’t made this argument.
@jacob I sure don’t like it either. but now that we have national news stories where chatbot use is directly linked to suicides, including teen suicides, divorces, mental health hospitalizations and a variety of other extreme outcomes, I don’t think that the addiction comparison risks trivializing it. We really aren’t taking the risks seriously enough.
-
@vaurora @glyph I don’t agree. Addiction with chemical dependence and heritable contributing factors feels palpably different to chatbots — to me. But also I don’t know of any evidence beyond anecdotes either way, so I think y’all’s opinion is just as valid as mine, and don’t much see a need to convince anyone. Maybe I’m wrong, wouldn’t be the first time :)
-
@vaurora @glyph @jacob I've mentioned this elsewhere, but one of the reasons I am personally very unhappy about the potential of soon being forced to use LLMs to work is that I am reasonably confident that I have a high risk of just that. I have OCD *and* I am a former addict. This is the reason I've strictly avoided gambling of any kind; I have just the kind of brain that would very easily get me trapped in the deep end.
I think slop machines are exactly as risky to that kind of mind as slot machines, as it were.
-
@jacob @vaurora I don't think they're the *same*, but they do have similarities. The DSM-5 recognized gambling disorder in 2013, because psychology and psychiatry were starting to understand the common underlying mechanisms.
I don't think problem LLM use is exactly the same as problem gambling. We don't know, for example, if it's heritable in the same way, or what the triggers are. And the mechanism clearly isn't *exactly* the same, or we'd have "roulette psychosis".
-
@jacob @vaurora @glyph the less loaded analogy is people’s continued overuse of social media, which I suspect stems from the same intermittent reward hook, and also causes ~all the bad things attributed to LLMs—suicide, awful social decision-making, degradation of ability to interact normally with IRL humans/family, etc.
This is not to defend either social media or LLM usage! But it’s probably more useful than analogies to things that cause physical withdrawal dependencies and direct death.
-
@luis_in_brief @jacob @vaurora I should, perhaps, have used clearer language to couch the top post's behavioral observation as a personal one and not a clinical one. I've definitely known people I could describe colloquially as "social media addicts", including maybe myself. But I've never *personally* observed the "I'm fine, I know it's bad for others but I'm fine, I don't need to change anything, it's helping actually" rhetoric from any social media user that I see *regularly* from LLM users.
@luis_in_brief @jacob @vaurora also, not for nothing, but the vast harms of social media were mostly documented after the introduction of algorithmic timelines, and those use ML, which shares a bunch of optimization-loop characteristics with LLMs. maybe let's just make it all illegal in the meantime while we figure it out /s
-
@luis_in_brief @jacob @glyph let's be clear, chatbots are causing death quite directly. I've read the interviews with the people who came out the other side of an AI psychosis, and it's like they were hypnotized. It's not "oh, they were going to die one way or another." People with AI romantic partners were in many cases "propositioned" by the AI out of the blue. The chatbot can and does initiate new behavior and intensify it.
(Speaking as someone who was suicidal for multiple years.)