@glyph Did you quote post something?
-
@FediThing @glyph LLMs are trained on human-created material in much the same way a person learns by reading books and then acting on what they've learned. They don't directly reproduce that material.
As I mentioned, I strongly believe that broad sharing of knowledge is a net benefit to humanity. Questions of credit and attribution are a separate issue, and to discuss them meaningfully you first have to be clear about what you consider reasonable attribution in the first place.
You can take, for instance, the tankgame, and tell me which part should be attributed but isn't, and what you would attribute it to.
On the "against the will" point: I want you to use the code I wrote; it's definitely not against my will that LLMs are trained on the code I've written over the years.
-
@FediThing @glyph We don't know enough yet about how these systems work. If you look into the grokking paper, for instance, you can see that there is generalized learning going on, but we haven't yet been able to replicate this across large models everywhere. The idea that they are not learning anything during training is no longer state of the art.
-
@FediThing @glyph LLMs can learn from other LLMs, and that's why, for instance, reasoning traces are not exposed by OpenAI: they are afraid that the Chinese are learning from them.
This debate is not going anywhere so we'll exit it here now.
-
@glyph I want to share this, and I don't want to accidentally introduce it to someone who has the belief: "Oh, that won't happen to me. I don't have an addictive personality."
-
@glyph
I see your analogy and raise you the follow-up video at https://youtube.com/watch?v=atbAWMUJxs8 -- I feel pretty sure we're just cooked, in some ways, as a society. Because there's always this contingent of assholes who are so intent that nobody could possibly have a bad experience that they must be getting paid or have some other incentive or ulterior motive to be slagging off their thing of choice.
"You're just holding it wrong."
"You just need some self control."
"If you use them all in tandem and prompt them in this specific way it shouldn't hallucinate as much."
It's this thread of relentless justification and rationalization, this unwillingness to consider the costs until they are undeniable, which is generally the point at which they are the most difficult to pay. This sophistry, this abuse of nuance and reason and technicality to create semi-reasonable doubts and logical footholds. Like, I'm fucking tired of it, and I'm not sure it's ever actually gotten us anything good. And I don't really know what, if anything, I can do about it, besides watch it burn the world in a hundred different ways, culminating in a hundred different variations of that "For a beautiful moment in time, we created a lot of value for shareholders" meme.
-
@glyph I can see how that's worrisome; I would point people to a more boring source: https://en.wikipedia.org/wiki/Mitragyna_speciosa
The main issues appear to be that:
1) the plant contains a mixture of something like 50 active compounds, and any of those could have positive and/or negative effects
2) users probably have no idea of the dosage they're getting -- trust me, folks, if your daily Adderall were a random dosage it would be a bad time
3) people do develop addictive behavior, and that *will* fuck them up
-
@glyph I would also put this in the context of marijuana and alcohol that most people are familiar with:
1) there are always negative effects to blowing smoke into your lungs
2) there are always dosages of drugs that will fuck up your kidneys/liver
3) stimulants in unknown doses will damage your heart sooner or later
4) opioids in unknown doses, especially combined with alcohol, will risk hypoxia and death
Addictive behavior causes people to make worse choices with respect to these risks.
-
@glyph I know this is completely beside the point but… man, did that supposedly addictive game look dumb AF. It looked hideous and barely functional, and watching it I'm just like "How the hell did anyone intentionally use that thi-"
…
Oh wait. Maybe it's not as beside the point as I thought. Wow, I just got that. Nicely done.
-
@glyph I was about to ask about it because I'd never heard of it before mew. But I do actually recognise it now, it's just weird to me because LLM addiction seems like it predates the analogy/example given here mew.
-
@isaackuo @nyrath @glyph
I can't find it anymore but the local science museum used to have a rudimentary psychologist program running on a computer - in the eighties! The logic behind it was on par with adventure games of the era, including the eponymous _Adventure_.
It would answer anything you typed with more questions like "what makes you say that?" etc.
The exhibit's purpose was to show that even simple repetitive responses could offer the illusion of meaningful conversation.
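For anyone who has never poked at one of these: the whole trick is a small keyword table plus canned follow-up questions. A toy sketch in that spirit (my own reconstruction, not the museum's actual program) might look like:

```python
# Toy ELIZA-style responder: match a keyword, reflect a question back.
# (Hypothetical reconstruction, not the museum exhibit's real code.)
import random

RULES = [
    ("because", ["Is that the real reason?", "What other reasons come to mind?"]),
    ("i feel",  ["Why do you feel that way?", "How long have you felt that?"]),
    ("you",     ["We were talking about you, not me."]),
]
DEFAULT = ["What makes you say that?", "Please, go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    lowered = text.lower()
    for keyword, replies in RULES:
        if keyword in lowered:
            return random.choice(replies)
    return random.choice(DEFAULT)

while True:
    line = input("> ")
    if not line:
        break
    print(respond(line))
```

A handful of rules like that is enough to keep people typing, which was exactly the exhibit's point.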
-
Yes, back in the 1960s the ELIZA effect was a rude surprise.
-
@mitsuhiko @dragonfi @glyph If it’s at the limits of complexity as you claimed earlier, by definition, it is not maintainable. These are tradeoffs.
-
@mitsuhiko @dragonfi @glyph You… do know that good engineering expresses the system with as few lines of code as necessary?
-
@glyph if you keep writing shit like this I'm gonna keep pointing to it and saying, "Glyph says what I mean to but far more eloquently"
-
@donaldball @dragonfi @glyph take the example of the game, which is on GitHub. Critique it based on the actual source. How would you improve it as a human? What in it do you think the AI did a bad job at? Make it specific.
-
@mitsuhiko @dragonfi @glyph Poking around, I’d say it’s a competent sophomore game. Too much game logic mixed in with engine logic for my taste, but that can be a fine choice for write-only apps like simple games.
At the limit of complexity? Hardly. That claim is puffery. There’s nothing evidently innovative that I saw, just a competent assembly of others’ ideas.
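To make the "game logic mixed in with engine logic" complaint concrete, this is the kind of separation I'd usually reach for -- a toy illustration in the abstract, not code from the repo in question:

```python
# Toy illustration of keeping engine plumbing and game rules apart
# (hypothetical sketch; not based on the repository being discussed).

class Engine:
    """Engine layer: owns entities and the update loop, knows no game rules."""

    def __init__(self):
        self.entities = []

    def add(self, entity):
        self.entities.append(entity)

    def tick(self, dt):
        for entity in self.entities:
            entity.update(dt)


class Tank:
    """Game layer: tank-specific rules (speed, movement) live here only."""

    def __init__(self, x=0.0, speed=5.0):
        self.x = x
        self.speed = speed

    def update(self, dt):
        self.x += self.speed * dt


if __name__ == "__main__":
    engine = Engine()
    engine.add(Tank())
    engine.tick(1 / 60)  # advance one 60 fps frame
```

The point isn't the toy code itself; it's that the engine shouldn't need to change when the rules of the game do.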