@glyph Did you quote post something?
-
Needless to say, if you time traveled back to the 1950s and tried to explain this to the people back then, they would have thought you were crazy...
-
@glyph a lot of things that are deeply fun have those properties. Computer games, even going to the gym, can be that way. It’s a very exciting time and it’s deeply enjoyable to play with these things, while simultaneously being useful. I’m not sure how I can break that to you.
@mitsuhiko now you're just doing this dril tweet https://en.wikiquote.org/wiki/Dril#:~:text=the%20wise%20man%20bowed%20his%20head%20solemnly
justifying your addiction with moral relativism and an appeal to a benefit that I do not think is, on net, good.
I think that as a society we've got our arms around gaming & the gym; there's plenty of data about how addictive those things are. (And also about how "computer games" is a pretty big bucket, where you can find a tremendous amount of gamblification right now, which is just as bad as, if not worse than, LLMs)
-
The vast majority of LLM usage isn't even going to be voluntary or self-rationalized.
Google is by far the world's most popular search engine, used by literally billions of people per day, and Google is busy rolling out their execrable "AI overviews" globally.
The vast majority of "LLM users" couldn't tell you what an LLM even is and are not making any sort of deliberate decision to engage with an LLM or really thinking about it at all.
They're just going to be searching for something and then reading the thing that is given to them at the top of the results. They are going to be funneled to an LLM when they seek out customer service or technical support or apply for a job or accept the 'assistance' jovially offered to them in their word processor or PDF viewer or right in their operating system.
You can convince someone not to use ChatGPT, perhaps, but I'm really not sure what anyone is going to do about the seemingly universal goal of every tech giant on earth to redefine how we interact with information through the interface of AI.
In the not-too-distant future it will become very difficult for the average person to know where AI even begins and ends.
-
@mitsuhiko one of the reasons I find this so upsetting is that your argument here is so obviously missing the point that it makes me scared that these things can so terribly damage your metacognition that this seems like a reasonable thing to say. *I* still want to experiment with them to develop a better understanding, and this kind of public rhetoric makes me feel like I might be poisoning my own brain to do so!
-
@mitsuhiko I could *easily* accept an argument like "we all have to make decisions under uncertainty and *in my experience*, accounting for subjective distortion as best I can, there has been a big net benefit. we're going to have to agree to disagree until someone does a more comprehensive study; I'll gather more data on my own use in the meanwhile"
but your insistence that I recognize these anecdotal examples (which I *already acknowledged repeatedly*) as *proof* of net benefit is scary
-
@glyph Absolutely, but even in that we can make out the general shape of the thing. We know this current model is not economically sustainable by any party. For users, the result will be inability to access models, or paying cripplingly high prices to do so. Option 1 will expose their inability to function without the models, and option 2 will impose the kinds of costs that look like rock bottom in other addictions.
@mttaggart @glyph If there is such an unwinding I think users that can't afford premium service providers will fall back to free/subsidized providers and tools that run on-device. A whole spectrum rather than a binary have / have not.
-
My old boss became very reliant on LLMs… even though she could clearly see their limitations in terms of the field we were working in. I’m talking 6 months ago, and I appreciate that improvements to LLMs are occurring weekly if not daily. But while she lapped them up (she was dyslexic and used to try to hide it by getting others to do all her written communications), I found LLMs left me frustrated: a lot of time wasted on prompts that led to diminishing returns.
-
@glyph I wonder how long before "never used LLM" is a positive line item on a resume... 😂
-
@glyph if you’re not familiar, searching for “variable intermittent reinforcement” might be informative. It’s the same mechanism behind slot machines, Facebook notifications, and email.
-
@genehack the phrasing and adjacency to "slot-machine" as an adjective was not an accident :)
-
@glyph so apparently I get to be glad I forgot to ever try kratom despite repeated urging from a friend several years back.
Thanks, ADHD!
-
@glyph sure, and I read that post — that area is something where I have personal experience (some time at NIDA, learned a thing or two…) so the parallels resonate stronger for me, and I’m maybe more attuned than usual to the resulting dangers.
Glad you’re aware of all of that, I think it’s an aspect many folks miss.
-
@genehack Gotcha. By the way, if there's a subtle distinction between "variable intermittent reinforcement" and "intermittent reward schedule" as terms of art, I'd definitely be curious to hear it — to my understanding they are strictly synonyms and used interchangeably.
-
@glyph as far as I know — as an adjacent non-expert person — they’re synonyms.
The way it was explained to me involved a rat in a cage with a food-pellet lever. If pushing the lever resulted in a food pellet every time, the rat pushed the lever when it was hungry. If pushing equaled no pellet, the rat pushed it a few times, then lost interest and stopped forever. If the lever push was 50-50 (or even worse odds) food pellet vs. nothing, the rat would sit there all day and push the lever over and over.
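A minimal sketch of that lever story as a toy simulation, just to make the shape of it concrete: the "rat" keeps a running estimate of how likely a press is to pay off and gives up once that estimate falls low enough. The probabilities, thresholds, and the update rule here are made up for illustration, not taken from any real study, and it only captures why the never-rewarded rat quits quickly while the 50-50 rat keeps going, not the full compulsive-behavior story.

```python
# Toy simulation of the lever/pellet story above -- an illustration with
# invented numbers, not a model of real animal behavior.
import random

def presses_before_giving_up(reward_prob, max_presses=500,
                             quit_threshold=0.05, learning_rate=0.1, seed=0):
    rng = random.Random(seed)
    expectation = 1.0  # start hopeful: the first press might pay off
    for press in range(1, max_presses + 1):
        rewarded = rng.random() < reward_prob
        # nudge the running expectation toward what was just observed
        expectation += learning_rate * ((1.0 if rewarded else 0.0) - expectation)
        if expectation < quit_threshold:
            return press  # expectation collapsed: the rat stops pressing
    return max_presses  # still pressing when we stopped counting

for prob in (1.0, 0.0, 0.5):
    print(f"pellet probability {prob}: {presses_before_giving_up(prob)} presses")
```

With these made-up numbers the never-rewarded rat quits after a few dozen presses, while both the always-rewarded and the 50-50 rat are still pressing when the loop runs out, which is roughly the pattern described above.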
-
@glyph I mean, I don't know what to tell you. Are you still really doubting that these things are useful? I've written so many times about this now; are you dismissing it? I can point you to code that I've written over the last seven days that, in terms of complexity and utility, is way beyond what we were able to push out over Christmas. (eg: https://github.com/mitsuhiko/tankgame which is public)
Like, how can you doubt this? It just boggles my mind.
@mitsuhiko @glyph A greenfield project seems like a good way to trial new technology, but it's not a real-world condition. Greenfield eventually turns to brownfield. Time and complexity show where simple systems begin to break down.
Once that happens, the AI tools become exponentially worse at making changes. Limited context windows. Complex logic changing over time. Updated dependencies.
And the human may lack the knowledge or ability to pick up the slack: https://leadershiplighthouse.substack.com/p/i-went-all-in-on-ai-the-mit-study
-
The main difference is that "reinforcement" and "reward" are not the same thing - reinforcement is a process meant to encourage a behavior, but it doesn't have to mean giving a reward. It could for instance involve removing an aversive stimulus - this is called negative reinforcement, one of the most misunderstood terms in psychology. So saying something is "intermittent reward" is more specific than saying it's "intermittent reinforcement".